FLUKA User Guide
INFN TC 05/11
SLAC–R–773
12 October 2005
Fluka:
a multi-particle transport code
(Program version 2014)
GENEVA
2014
CERN–400 copies printed–October 2005
Abstract
This report describes the 2014 version of the Fluka particle transport code. The first part introduces
the basic notions, describes the modular structure of the system, and contains an installation and beginner’s
guide. The second part complements this initial information with details about the various components of
Fluka and how to use them. It concludes with a detailed history and bibliography.
Preface
The INFN–CERN Collaboration Agreement for the Maintenance and Development of the Fluka
Code of December 2003 called for the publication of two major reports, focused respectively more on the
technical aspects (this report), and on the description of the physics models embedded into the Code (to be
published next year).
This document is a reference guide for the users of the Fluka particle transport code, so as to
provide them with a tool for the correct input preparation and use of the code, a description of the basic
code structure, and of the history of its evolution. In particular, this guide is intended to document the
2014 version of Fluka, an update of the 2005 version, which represented a fundamental evolution in Fluka
development with respect to the previous releases, from both the scientific and technical points of view. This
development is one of the main tasks carried out under the INFN–CERN Collaboration Agreement for the
Maintenance and Development of the Fluka Code of December 2003. Notwithstanding the efforts of the
authors, it is not possible to guarantee that this guide is free from error. Furthermore, since the Fluka
code is continually evolving, this guide will also evolve with time. The 2005 version of this document [91]
also represents the first and basic reference to be used in citation of Fluka, together with Ref. [41] (a full
list of References starts on page 406). In using the code, the user agrees on the authorship and copyright
and hence is bound to quote the above references.
This guide is organized as follows. After the Fluka Licence, which the user is required to read
carefully, Part I contains a summary description of what Fluka is and what its capabilities are (Chapter 1),
a Beginner’s Guide (Chapter 2), the Installation Guide (Chapter 3), and a list of the main modules (Fortran
files) of which Fluka is composed (Chapter 4).
Part II is the real User Guide, where the user can find a detailed description of what is needed to
run Fluka: particles and materials considered in the code (Chapter 5), the rules for building Fluka input
(Chapter 6), each single Fluka command and corresponding parameters (Chapter 7), the Geometry system
(Chapter 8), the standard Fluka output (Chapter 9), how to write the user routines (Chapter 13), etc.
At the end of Part II, the history of Fluka is reported, putting emphasis on the third generation of
the code evolution (from 1988 to the present version), giving more details about the code structure and all
the fundamental physics models used in Fluka.
It must be noted that early versions of the Fluka hadronic event generator as implemented in other
codes (in particular GEANT3) should be referenced as such (e.g. GEANT-FLUKA) and not as Fluka.
They have little in common with the present version and should be considered virtually obsolete. The
proper reference to GEANT-FLUKA is Ref. [68].
Please contact A. Ferrari or A. Fassò (see email addresses below) for any comment on or criticism
about this manual and/or the code.
The Fluka Authors:
INFN, Milan, Italy Giuseppe Battistoni, Francesco Broggi, Mauro Campanella, Silvia Muraro, Paola Sala
Italy Ettore Gadioli, Maria Vittoria Garzelli
Papa Giovanni 23mo Hospital, Bergamo, Italy Paolo Colleoni
INFN, Pavia, Italy Andrea Fontana
CNAO, Pavia, Italy Andrea Mairani, Maurizio Pelliccioni
INFN, Legnaro, Italy Lucia Sarchiapone
INFN, Bologna, and University of Bologna, Italy Annarita Margiotta, Maximiliano Sioli
Sapienza University, Rome, Italy Vincenzo Patera
University of Rome Tor Vergata, Italy Maria Cristina Morone
CERN EN-STI, Geneva, Switzerland Markus Brugger, Marco Calviani, Francesco Cerutti, Luigi Sal-
vatore Esposito, Rosario Esposito, Anatoli Fedynitch, Alfredo Ferrari, Ruben Garcia Alia,
Pablo Garcia Ortega, Anton Lechner, Carlo Mancini, Thanasis Manousos, Alessio Mereghetti,
Thiago Viana Miranda Lima, Ela Nowak, Philippe Schoofs, Nikhil Shetty, Lefteris Skordis,
George Smirnov, Vasilis Vlachoudis, Christina Weiss
CERN DGS-RP, Geneva, Switzerland Matteo Magistris, Stefan Roesler, Chris Theis, Heinz Vincke,
Helmut Vincke, Joachim Vollaire
ELI-Beamlines, Prague, Czech Republic Alberto Fassò, Roberto Versaci
ESS, Lund, Sweden Luisella Lari
SLAC, Stanford, USA Mario Santana Leitner
Jefferson Lab, Newport News, USA Pavel Degtiarenko, George Kharashvili
Houston University, USA Toni Empl, Son Hoang, Martin Kroupa, John Idarraga, Lawrence S. Pinsky
NASA, Houston, USA Amir Alexander Bahadori, Kerry Lee, Brandon Reddell, Edward Semones,
Nicholas Stoffle, Neal Zapp
TRIUMF, Vancouver, Canada Mary Chin, Mina Nozar, Michael Trinczek, Anne Trudel
Ludwig Maximilian University (LMU), Faculty of Physics, Munich, Germany Georgios Dedes,
Katia Parodi
LMU and Heidelberg University Hospital, Heidelberg, Germany Ilaria Rinaldi
Helmholtz-Zentrum Dresden-Rossendorf, Germany Anna Ferrari, Stefan Müller
Germany Johannes Ranft
Uppsala University, Sweden Mattias Lantz
Austrian Institute of Technology, Vienna, Austria Sofia Rollet, Andrej Sipaj
MedAustron, Austria Till Tobias Böhlen
PSI, Villigen, Switzerland Stefania Trovati
Switzerland Vittorio Boccone
This work has been supported by INFN and CERN in the framework of the Collaboration Agreement
for the development, maintenance and release of the Fluka software. This work was partially supported
under DOE contract DE-AC02-76-SF00515 and NASA Grant NAG8-1901.
Depending on whether the Licensee requested and got approval for a User or Trial FLUKA version,
the Fluka User license or the Fluka Trial Version license applies respectively (see below). For commercial
users, ad hoc licenses are agreed; hence none of the licenses below applies.
1. Subject to the terms and conditions of this license, the Fluka Copyright Holders herewith grant to the
Licensee a worldwide, non-exclusive, royalty-free, source and object code license to use and reproduce
Fluka for internal scientific non-commercial non-military purposes only.
Notwithstanding the foregoing, the Licensee shall not execute Fluka in a manner that produces
an output whose contents are directly useable or easily employable to simulate the physics models
embedded within Fluka in a generic manner, or excise portions of Fluka source or object code, and
execute them independently of Fluka. Extracting specific isolated results from any of the individual
internal physics models embedded within Fluka is not permitted. Permitted use and reproduction
are referred to below as “Use”.
2. Modification (including translation) of Fluka, in whole or in part, is not permitted, except for modifi-
cation of Fluka User Routines that do not circumvent, replace, add to or modify any of the functions
of the Fluka core code. Permitted modifications are referred to below as “Modifications”.
3. Fluka is licensed for Use by the Licensee only, and the Licensee shall not market, distribute, transfer,
license or sub-license, or in any way make available (“Make Available”) Fluka or Modifications, in
whole or in part, to third parties, without prior written permission. The Licensee shall not assign or
transfer this license.
4. Notwithstanding section 3, the Licensee may Make Available his Modifications of Fluka User Routines
to third parties under these license conditions.
5. The Licensee shall not insert Fluka code or Modifications, in whole or in part, into other codes
without prior written permission.
6. Any use of Fluka outside the scope of this license is subject to prior written permission.
GRANT BACK
7. The Licensee shall in a timely fashion notify fcc@fluka.org of any Modifications carried out by him.
Except for Authors, Collaborators, and employees of the Fluka Copyright Holders, the copyright
in whose Modifications shall automatically be vested in the Fluka Copyright Holders, the Licensee
herewith grants the Fluka Copyright Holders a perpetual, royalty-free, irrevocable and non-exclusive
license to his Modifications, with no limitation of use. The Licensee acknowledges that the Fluka
Copyright Holders may insert such Modifications into future releases of Fluka, subject to appropriate
acknowledgment of the Licensee’s contribution.
8. The Licensee shall report as soon as practical any errors or bugs found in any portion of Fluka to
fluka-discuss@fluka.org
PUBLICATIONS AND ACKNOWLEDGEMENT
9. The Licensee shall explicitly acknowledge his use of Fluka in any publication or communication, scien-
tific or otherwise, relating to such use, by citing the Fluka set of references (http://www.fluka.org,
see below) and the Fluka copyright notice.
10. The Licensee shall ensure that the Fluka set of references, the Fluka copyright notice and these
license conditions are not altered or removed from Fluka and that all embodiments of Fluka and
Modifications contain in full the Fluka set of references, the Fluka copyright notice, and these license
conditions.
11. Any insertion of Fluka code or Modifications, in whole or in part, into other codes with permission
under section 5 shall preserve the Fluka set of references, the Fluka copyright notice and these license
conditions in the Fluka code or Modifications concerned, and must also reproduce these within any
additional global notices included along or embedded within the software into which the Fluka code
or the Modifications have been integrated, in whole or in part. Any part of the Fluka code or
Modifications so inserted shall continue to be subject to these license conditions.
12. Publication of any results of comparisons of specific internal physics models extracted from Fluka
with permission under section 6 with data or with other codes or models is subject to prior written
permission.
13. Contributions to any formal code comparisons and validation exercises pertaining to Fluka, sponsored
by recognised bodies or within the framework of recognised conferences and workshops, are subject to
prior written permission.
WARRANTY AND LIABILITY
14. DISCLAIMER FLUKA IS PROVIDED BY THE FLUKA COPYRIGHT HOLDERS “AS IS” AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, IMPLIED
WARRANTIES OF MERCHANTABILITY, OF SATISFACTORY QUALITY, AND FITNESS FOR
A PARTICULAR PURPOSE OR USE ARE DISCLAIMED. THE FLUKA COPYRIGHT HOLDERS
AND THE AUTHORS MAKE NO REPRESENTATION THAT FLUKA AND MODIFICATIONS
THEREOF WILL NOT INFRINGE ANY PATENT, COPYRIGHT, TRADE SECRET OR OTHER
PROPRIETARY RIGHT.
15. LIMITATION OF LIABILITY THE FLUKA COPYRIGHT HOLDERS AND ANY CONTRIBUTOR
SHALL HAVE NO LIABILITY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, CONSE-
QUENTIAL, EXEMPLARY, PUNITIVE OR OTHER DAMAGES OF ANY CHARACTER INCLUD-
ING, WITHOUT LIMITATION, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES,
LOSS OF USE, DATA OR PROFITS, OR BUSINESS INTERRUPTION, HOWEVER CAUSED
AND ON ANY THEORY OF CONTRACT, WARRANTY, TORT (INCLUDING NEGLIGENCE),
PRODUCT LIABILITY OR OTHERWISE, ARISING IN ANY WAY OUT OF THE USE OF FLUKA,
EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES, AND THE LICENSEE SHALL
HOLD THE COPYRIGHT HOLDERS AND ANY CONTRIBUTOR FREE AND HARMLESS FROM
ANY LIABILITY, INCLUDING CLAIMS BY THIRD PARTIES, IN RELATION TO SUCH USE.
TERMINATION
16. This license shall terminate with immediate effect and without notice if the Licensee fails to comply
with any of the terms of this license, or if the Licensee initiates litigation against any of the Fluka
Copyright Holders or any contributors with regard to Fluka. It shall also terminate with immediate
effect from the date on which a new version of Fluka becomes available. In either case sections 14
and 15 above shall continue to apply to any Use or Modifications made under these license conditions.
Copyright Italian National Institute for Nuclear Physics (INFN) and European Organization for Nuclear
Research (CERN), 1989–2014.
All rights not expressly granted under this license are reserved.
Requests for permissions not granted under this license shall be addressed to the Fluka Collaboration
Committee, through fcc@fluka.org. Any permission may only be granted in writing.
This software results in particular from work performed by Alberto Fassò, Alfredo Ferrari, Johannes Ranft,
Paola Sala (the “Authors”), and their collaborators (the “Collaborators”).
INFN and CERN are the exclusive source of distribution of the code, bug fixes and documentation of the
Fluka software (Fluka website, http://www.fluka.org).
By installing, or otherwise using the Trial Version of Fluka software, you agree to be bound by the terms
of this Agreement.
If you do not agree to the terms of this Agreement, you should refrain from installing or using the Trial
Version of Fluka software.
DEFINITIONS
The Licensors means both CERN and INFN.
The Licensee means any person or entity exercising any permission granted by this license.
The Fluka software (“Fluka”) means the last updated version of the fully integrated particle physics
Monte Carlo simulation software package being developed since 1989, available from the official Fluka
website http://www.fluka.org and authorised mirror sites. Fluka includes Fluka User Routines (as
defined below) and accompanying documentation. Output does not form part of Fluka as herein defined.
Fluka User Routines (“User Routines”) means the set of subroutines collected in the usermvax section of
Fluka and forming part of the standard distribution of Fluka.
The Trial Version of Fluka software (“Fluka Trial Version”) means a version of the Fluka software to
be used only to review, demonstrate and evaluate the Fluka software. The Fluka Trial Version has an
expiration date which is set out in the title of the Agreement at the sole discretion of the Licensors and
which starts as from the date of its downloading by the Licensee (“the Expiration Date”). The Fluka Trial
Version features a built-in limitation that purposely impedes unlimited use of the software. It therefore runs
in a sub-optimal way and limits the computational time to what is deemed adequate for the Purpose as
described below.
The Purpose should be defined as the willingness of the Licensee to use the Trial version of the Fluka
software in order to evaluate the suitability of Fluka software for commercial applications and to determine
whether to purchase Fluka under commercial license conditions.
Make Available means to market, distribute, transfer, license or sub-license, or in any way dispose of or
make available.
Output means results and data generated using Fluka Trial Version, or any procedure or algorithm making
use of results and data generated with Fluka Trial Version, but excludes Comparisons (as defined below).
Comparisons means results of benchmark comparisons of the Fluka Trial Version physics models.
LICENSE GRANT
1. The Parties agree that the copyright and all other rights related to Fluka, in whatever form, including
but not limited to the object code, source code and user interface, are vested in Licensors, and that
Licensors retain all title, copyright and any other proprietary rights in Fluka.
2. All rights not expressly granted under this Agreement are reserved.
3. Subject to the terms and conditions of this Agreement, Licensors herewith grant to the Licensee a
non-exclusive, non-transferable object code license to use the Fluka Trial Version for the Purpose.
4. The scope of the license granted under this Agreement is strictly limited to the use of the Fluka Trial
Version for the Purpose and specifically excludes any scientific or commercial applications.
5. The Licensee shall under no circumstances circumvent the built-in limitations contained in the Fluka
Trial Version.
6. Fluka Trial Version is licensed for use by the Licensee only, at the Licensee’s site, and the Licensee
shall not Make Available Fluka Trial Version, in whole or in part, either separately or with a product
or service, to third parties, without Licensors’ prior written permission.
7. The Licensee may not modify, translate, decompile, disassemble, decrypt, extract, or otherwise reverse
engineer Fluka Trial Version, nor may the Licensee attempt to create the source code from the object
code of Fluka Trial Version. The Licensee may not insert Fluka Trial Version code, in whole or in
part, into other codes. The Licensee may not create derivative works of Fluka Trial Version. It is
understood that Output is not considered derivative works.
8. Without prejudice to article 5, the Licensee may modify Fluka User routines to the extent that the
purpose of the modifications is limited to the adaptation of input and output interfaces of Fluka.
Any such modifications are permitted only to the extent that they do not circumvent, replace, add to,
or modify any of the functions of Fluka, or extract specific isolated results from any of the individual
internal physics models embedded within Fluka.
9. The Licensee may use Fluka Trial Version to generate Output for the Purpose only. The Licensee
shall not Make Available such Output to third parties.
10. The Licensee may use Fluka Trial Version to generate Comparisons for the Purpose but may not
Make Available such Comparisons.
11. Any use of Fluka Trial Version outside the scope of articles 3 to 7 is subject to prior written permission
from Licensors. No Party may assign or transfer this Agreement to a third party.
12. The Licensee shall not use Fluka Trial Version, Output, or Comparisons for military purposes.
13. The Licensee shall not copy Fluka Trial Version, in whole or in part, for distribution purposes.
PAYMENT
15. The Licensors are under no obligation to correct any problems or errors of Fluka Trial Version.
WARRANTY AND LIABILITY
16. DISCLAIMER: FLUKA TRIAL VERSION IS PROVIDED BY LICENSORS “AS IS” AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, IMPLIED WAR-
RANTIES OF MERCHANTABILITY, OF SATISFACTORY QUALITY, AND FITNESS FOR A
PARTICULAR PURPOSE OR USE ARE DISCLAIMED. LICENSORS AND THE AUTHORS
MAKE NO REPRESENTATION THAT FLUKA TRIAL VERSION WILL NOT INFRINGE ANY
PATENT, COPYRIGHT, TRADE SECRET OR OTHER PROPRIETARY RIGHT. LICENSORS
ARE NOT AWARE OF ANY FACTS THAT WOULD LEAD A REASONABLE PERSON TO BE-
LIEVE THAT USE OF FLUKA TRIAL VERSION WOULD INFRINGE THIRD PARTY RIGHTS.
THE LICENSEE ACKNOWLEDGES THAT LICENSORS AND THE AUTHORS HAVE NOT PER-
FORMED ANY SEARCHES OR INVESTIGATIONS INTO THE EXISTENCE OF ANY THIRD
PARTY RIGHTS THAT MAY AFFECT FLUKA TRIAL VERSION.
18. The Parties shall take all necessary measures to prevent any infringement of the terms of this Agree-
ment. The Licensee shall be liable to Licensors for any such infringement by the Licensee and shall
hold Licensors free and harmless and indemnify them for any and all claims or lawsuits which may
result therefrom.
DURATION AND TERMINATION
19. This Agreement shall enter into force on the day of its downloading by the Licensee.
20. This Agreement shall terminate and the license shall lapse after the Expiration date of the Fluka
Trial Version.
21. This Agreement may terminate if the Licensee fails to comply with any of the terms of this Agreement
such termination having been notified in writing by the Licensors and being effective within thirty (30)
days unless within that period the breach is remedied by the Licensee, or if the Licensee institutes
litigation against any of Licensors or any contributors with regard to Fluka Trial Version, without
any compensation being due by either Licensor to the Licensee.
22. In any case of termination of the Agreement the license shall lapse and the Licensee shall uninstall and
delete all copies of the Software, including the user documentation.
23. In case of termination of this Agreement for any reason whatsoever and at the request of either Party,
the other Party shall promptly return any confidential information belonging to the first Party.
24. Notwithstanding termination of the Agreement howsoever caused, its provisions shall continue to bind
the Parties in so far and for as long as may be necessary to give effect to their respective rights and
obligations accrued prior to termination.
25. The provisions of this Agreement shall be interpreted in accordance with its true meaning and effect.
Without prejudice to CERN’s status as an Intergovernmental Organization, reference shall be made
to Swiss substantive law where (i) a matter is not specifically covered by this Agreement; or (ii) a
provision is ambiguous or unclear. Such reference shall be made exclusively for the matter or provisions
concerned, and shall in no event apply to the other provisions of this Agreement.
26. Any dispute under this Agreement that fails to be settled amicably shall be referred to arbitration,
drawn up by CERN in accordance with its status as an Intergovernmental Organization, in accordance
with the procedure defined at: http://legal.web.cern.ch/procedures/arbitration. Notwith-
standing reference of any dispute to arbitration, the Parties shall continue to be bound by their
obligations under this Agreement.
Table of contents
1 Introduction 3
1.1 What is FLUKA? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 A quick look at FLUKA’s physics, structure and capabilities . . . . . . . . . . . . . . . . . . 3
1.2.1 Physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.1.1 Hadron inelastic nuclear interactions . . . . . . . . . . . . . . . . . . . . . . 4
1.2.1.2 Elastic Scattering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.1.3 Nucleus-Nucleus interactions . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1.4 Transport of charged hadrons and muons . . . . . . . . . . . . . . . . . . . 5
1.2.1.5 Low-energy neutrons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.1.6 Electrons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.1.7 Photons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.1.8 Optical photons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.1.9 Neutrinos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.2 Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.3 Transport . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.4 Biasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.5 Optimisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.6 Scoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.7 Code structure, technical aspects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.8 Main differences between Fluka and earlier codes with same name . . . . . . . . . . 9
1.2.8.1 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3 Installation 35
3.1 Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.2 Installation instructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.2.1 Installation of the tar.gz packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.2.2 Installation of the RPM package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.3 Package content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.4 Pre-connected I/O files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
II User Guide 41
9 Output 303
9.1 Main output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
9.2 Scratch file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
9.3 Random number seeds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
9.4 Error messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
9.5 Estimator output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
9.5.1 DETECT output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
9.5.2 EVENTBIN output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
9.5.3 EVENTDAT output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
9.5.4 RESNUCLEi output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
9.5.5 USRBDX output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
9.5.6 USRBIN output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
9.5.7 USRCOLL output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
9.5.8 USRTRACK output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
9.5.9 USRYIELD output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
9.6 USERDUMP output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
9.7 RAY output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
References 406
Index 419
Part I
Introduction
1.1 What is FLUKA?
Fluka is a general purpose tool for calculations of particle transport and interactions with matter, covering
an extended range of applications spanning from proton and electron accelerator shielding to target de-
sign, calorimetry, activation, dosimetry, detector design, Accelerator Driven Systems, cosmic rays, neutrino
physics, radiotherapy etc.
The highest priority in the design and development of Fluka has always been the implementation
and improvement of sound and modern physical models. Microscopic models are adopted whenever possible,
consistency among all the reaction steps and/or reaction types is ensured, conservation laws are enforced at
each step, results are checked against experimental data at single interaction level. As a result, final predic-
tions are obtained with a minimal set of free parameters fixed for all energy/target/projectile combinations.
Therefore results in complex cases, as well as properties and scaling laws, arise naturally from the underlying
physical models, predictivity is provided where no experimental data are directly available, and correlations
within interactions and among shower components are preserved.
Fluka can simulate with high accuracy the interaction and propagation in matter of about 60 different
particles, including photons and electrons from 100 eV (photons) and 1 keV (electrons) up to thousands of TeV, neutrinos, muons of any
energy, hadrons of energies up to 20 TeV (up to 10 PeV by linking Fluka with the Dpmjet code) and all
the corresponding antiparticles, neutrons down to thermal energies and heavy ions. The program can also
transport polarised photons (e.g., synchrotron radiation) and optical photons. Time evolution and tracking
of emitted radiation from unstable residual nuclei can be performed on line.
Fluka can handle even very complex geometries, using an improved version of the well-known Com-
binatorial Geometry (CG) package. The Fluka CG has been designed to correctly track charged particles
as well (even in the presence of magnetic or electric fields). Various visualisation and debugging tools are
also available.
For most applications, no programming is required from the user. However, a number of user interface
routines (in Fortran 77) are available for users with special requirements.
The Fluka physical models are described in several journal and conference papers; on the techni-
cal side the stress has been put on four apparently conflicting requirements, namely efficiency, accuracy,
consistency and flexibility.
Efficiency has been achieved through frequent recourse to table look-up sampling, while a systematic
use of double precision has had a great impact on overall accuracy: both qualities have benefited from a
careful choice of the algorithms adopted. To attain a reasonable flexibility while minimising the need for
user-written code, the program has been provided with a large number of options available to the user, and
has been completely restructured introducing dynamical dimensioning.
Another feature of Fluka, probably not found in any other Monte Carlo program, is its double
capability to be used in a biased mode as well as a fully analogue mode. That means that while it can
be used to predict fluctuations, signal coincidences and other correlated events, a wide choice of statistical
techniques is also available to investigate punchthrough or other rare events in connection with attenuations
by many orders of magnitude.
1.2.1 Physics
The Fluka hadron-nucleon interaction models are based on resonance production and decay below a few
GeV, and on the Dual Parton model above. Two models are used also in hadron-nucleus interactions. At
momenta below 3–5 GeV/c the Peanut package includes a very detailed Generalised Intra-Nuclear Cascade
(GINC) and a preequilibrium stage, while at high energies the Gribov-Glauber multiple collision mechanism
is included in a less refined GINC. Both modules are followed by equilibrium processes: evaporation, fission,
Fermi break-up, gamma deexcitation. Fluka can also simulate photonuclear interactions (described by
Vector Meson Dominance, Delta Resonance, Quasi-Deuteron and Giant Dipole Resonance), electronuclear
interactions, photomuon production and electromagnetic dissociation. A schematic outline is presented
below:
– Inelastic cross sections for hadron-hadron interactions are represented by parameterised fits based on
available experimental data [142].
– For hadron-nucleus interactions, a mixture of tabulated data and parameterised fits is used [21, 129,
146, 147, 190].
– Elastic and charge exchange reactions are described by phase-shift analyses and eikonal approximation.
– Inelastic hadron-hadron interactions are simulated by different event generators, depending on energy
(resonance production and decay below a few GeV, and the Dual Parton model above, as recalled at the
beginning of this section).
– Nuclear interactions generated by ions are treated through interfaces to external event generators:
– Above 5 GeV per nucleon: Dpmjet-II or Dpmjet-III [180], with special initialisation procedure.
– Between 0.125 and 5 GeV per nucleon: modified Rqmd (Relativistic Quantum Molecular Dynam-
ics) [191–193]
– Below 0.125 GeV per nucleon: Bme (Boltzmann Master Equation) [49, 51, 52]
An original treatment of multiple Coulomb scattering and of ionisation fluctuations allows the code to handle
accurately some challenging problems such as electron backscattering and energy deposition in thin layers
even in the few keV energy range.
Energy loss:
– Bethe-Bloch theory [27–29,38,39]. Barkas Z³ effect [19,20] and Bloch Z⁴ effect [38]. Mott correction to
the Rutherford scattering cross section [111, 139]. Improved ionisation potential, handling of porous
substances, ranging out particles below energy cutoff [72].
– Optional delta-ray production and transport with account for spin effects and ionisation fluctuations.
The present version includes a special treatment [73] which combines delta-ray production with properly
restricted ionisation fluctuations and includes corrections for particle spin and electrons/positrons
and “distant collision” straggling corrections (similar to the Blunck-Leisegang ones). This original approach
makes use of very general statistical properties of the problem. Within this framework “practical”
solutions have been implemented into the code with very satisfactory results. This approach exploits
the properties of the cumulants of distributions, and in particular of the cumulants of the distribution
of Poisson distributed variables.
– Shell and other low-energy corrections derived from Ziegler [209].
– Ionisation potentials and density effect parameters according to Sternheimer, Berger and Seltzer [195].
– Non-ionising energy losses (NIEL) [111, 196]
– Displacements Per Atom (DPAs) [78]
– Special transport algorithm, based on Molière’s theory of multiple Coulomb scattering improved by
Bethe [30, 137, 138], with account of several correlations:
– between lateral and longitudinal displacement and the deflection angle
– between projected angles
– between projected step length and total deflection
– Accurate treatment of boundaries and curved trajectories in magnetic and electric fields
– Automatic control of the step
– Path length correction
– Spin-relativistic effects at the level of the second Born approximation [81].
– Nuclear size effects (scattering suppression) on option (simple nuclear charge form factors are imple-
mented, more sophisticated ones can be supplied by the user).
– Fano correction for heavy charged particle multiple scattering.
– Single scattering: algorithm based on the Rutherford formula with a screening factor in the form used
by Molière (for consistency with the multiple scattering model used by Fluka), integrated analytically
without any approximation. Nuclear form factors and spin-relativistic corrections at the first or second
Born approximation level accounted for by a rejection technique.
– Correction for cross section variation with energy over the step.
– Bremsstrahlung and electron pair production at high energy by heavy charged particles, treated as a
continuous energy loss and deposition or as discrete processes depending on user choice.
– Muon photonuclear interactions, with or without transport of the produced secondaries.
1.2.1.5 Low-energy neutrons
For neutrons with energy lower than 20 MeV, Fluka uses its own neutron cross section library (P5 Legendre
angular expansion, 260 neutron energy groups) containing more than 250 different materials, selected for
their interest in physics, dosimetry and accelerator engineering and derived from the most recently evaluated
data.
For nuclei other than hydrogen, kerma factors are used to calculate energy deposition (including from low-
energy fission). For details about the available materials, group structure etc., see Chap. 10.
1.2.1.6 Electrons
– Fluka uses an original transport algorithm for charged particles [81], including a complete multiple
Coulomb scattering treatment giving the correct lateral displacement even near a boundary (see hadron
and muon transport above).
– The variations with energy of the discrete event cross sections and of the continuous energy loss in
each transport step are taken into account exactly.
– Differences between positrons and electrons are taken into account concerning both stopping power
and bremsstrahlung [114].
– The bremsstrahlung differential cross sections of Seltzer and Berger [188, 189] have been extended to
include the finite value at “tip” energy, and the angular distribution of bremsstrahlung photons is
sampled accurately.
– The Landau-Pomeranchuk-Migdal suppression effect [119, 120, 126, 127] and the Ter-Mikaelyan polar-
isation effect in the soft part of the bremsstrahlung spectrum [198] are also implemented.
– Electrohadron production (only above ρ mass energy 770 MeV) via virtual photon spectrum and Vector
Meson Dominance Model [132]. (The treatment of the latter effect has not been checked with the latest
versions, however).
– Positron annihilation in flight and at rest
– Delta-ray production via Bhabha and Møller scattering
Note: the present lowest transport limit for electrons is 1 keV. Although in high-Z materials the Molière
multiple scattering model becomes unreliable below 20-30 keV, a single-scattering option is available which
makes it possible to obtain satisfactory results in any material, also in this low energy range.
The minimum recommended energy for primary electrons is about 50 to 100 keV for low-Z materials
and 100-200 keV for heavy materials, unless the single scattering algorithm is used. Single scattering trans-
port makes it possible to overcome most of the limitations at low energy for the heaviest materials, at the price of some
increase in CPU time.
1.2.1.7 Photons
Note: the present lowest transport limit for photons is 100 eV. However, fluorescence emission may be
underestimated at energies lower than the K-edge in high-Z materials, because of lack of Coster-Kronig
effect.
The minimum recommended energy for primary photons is about 1 keV.
1.2.1.8 Optical photons
– Generation and transport (on user’s request) of Cherenkov, Scintillation and Transition Radiation.
– Transport of light of given wavelength in materials with user-defined optical properties.
1.2.1.9 Neutrinos
– Electron and muon (anti)neutrinos are produced and tracked on option, without interactions.
– Neutrino interactions, however, are implemented, but independently of tracking.
1.2.2 Geometry
A part of the code where efficiency, accuracy, consistency and flexibility have combined to give very effective
results is the Fluka geometry. Derived from the Combinatorial Geometry package, it has been entirely
rewritten. A completely new, fast tracking strategy has been developed, with special attention to charged
particle transport, especially in magnetic fields. New bodies have been introduced, resulting in increased
rounding accuracy, speed and even easier input preparation.
– Combinatorial Geometry (CG) from Morse [60], with additional bodies (infinite circular and elliptical
cylinder parallel to X,Y,Z axis, generic plane, planes perpendicular to the axes, generic quadrics).
– Possibility to use body and region names instead of numbers.
– Possibility of using body combinations inside nested parentheses.
– Geometry directives for body expansions and roto-translation transformations
– Distance to nearest boundary taken into account for improved performance.
– Accurate treatment of boundary crossing with multiple scattering and magnetic or electric fields.
– The maximum number of regions (without recompiling the code) is 10000.
– The tracking strategy has been substantially changed with respect to the original CG package. Speed
has been improved and interplay with charged particle transport (multiple scattering, magnetic and
electric field transport) has been properly set.
– A limited repetition capability (lattice capability) is available. This makes it possible to avoid describing
repetitive structures in all details: only one module has to be described, and it can then be repeated as many
times as needed. This repetition does not occur at input stage but is hard-wired into the geometry
package, namely repeated regions are not set up in memory, but the given symmetry is exploited
at tracking time using the minimum amount of bodies/regions required. This makes it possible, in principle,
to describe geometries with even tens of thousands of regions (e.g., spaghetti calorimeters) with a reasonable
number of region and body definitions.
– Voxel geometry is available on option, completely integrated into CG.
Special options:
– Geometry debugger.
– Plotting of selected sections of the geometry, based on the Ispra Plotgeom program.
– Pseudo-particle RAY to scan the geometry in a given direction.
1.2.3 Transport
– Condensed history tracking for charged particles, with single scattering option.
– Time cutoff.
– Legendre angular expansion for low-energy neutron scattering.
– Transport of charged particles in magnetic and electric fields.
1.2.4 Biasing
– Leading particle biasing for electrons and photons: region dependent, below user-defined energy thresh-
old and for selected physical effects.
– Russian Roulette and splitting at boundary crossing based on region relative importance.
– Region-dependent multiplicity reduction in high energy nuclear interactions.
– Region-dependent biased downscattering and non-analogue absorption of low-energy neutrons.
– Biased decay length for increased daughter production.
– Biased inelastic nuclear interaction length.
– Biased interaction lengths for electron and photon electromagnetic interactions
– Biased angular distribution of decay secondary particles.
– Region-dependent weight window in three energy ranges (and energy group dependent for low-energy
neutrons).
– Bias setting according to a user-defined logic.
– User-defined neutrino direction biasing.
– User-defined step-by-step importance biasing.
1.2.5 Optimisation
1.2.6 Scoring
1.2.7 Code structure, technical aspects
– The whole program, including the numerical constants, is coded in double precision (at least the
versions for 32-bit word machines). The only exceptions are the low-energy neutron cross sections,
which are stored in single precision to save space.
– Consistent use of the latest recommended set of physical constant values [142].
– Dynamical memory allocation is implemented as far as possible.
– Extensive use of INCLUDE and of constant parameterisation
– 64-bit random number generator [125]
1.2.8 Main differences between Fluka and earlier codes with same name
The history of Fluka, spanning more than 40 years, is narrated in detail in Chap. 18. It is possible to
distinguish three different generations of “Fluka” codes over the years, which can be roughly identified as
the Fluka of the ’70s (main authors J. Ranft and J. Routti), the Fluka of the ’80s (P. Aarnio, A. Fassò,
H.-J. Möhring, J. Ranft, G.R. Stevenson), and the Fluka of today (A. Fassò, A. Ferrari, J. Ranft and
P.R. Sala). These codes stem from the same root and of course every new “generation” originated from
the previous one. However, each new “generation” represented not only an improvement of the existing
program, but rather a quantum jump in the code physics, design and goals. The same name “Fluka” has
been preserved as a reminder of this historical development — mainly as a homage to J. Ranft who has been
involved in it as an author and mentor from the beginning until the present day — but the present code is
completely different from the versions which were released before 1990, and in particular from the last one
of the second generation, Fluka87 [3, 4].
Major changes and additions have affected the physical models used, the code structure, the tracking
strategy and scoring. Important additions, such as a wider range of biasing possibilities and some specialised
tools for calorimeter simulation, have extended the field of its possible applications.
An exhaustive description of all these changes and new features over the years is reported in Chap. 18.
However, the best gauge of the program evolution is probably the widening of the application fields, and the
boost of its recognition and diffusion all over the world.
1.2.8.1 Applications
While Fluka86-87 was essentially a specialised program to calculate shielding of high energy proton accel-
erators, the present version can be regarded as a general purpose tool for an extended range of applications.
In addition to traditional target design and shielding, applications are now spanning from calorimetry to
prediction of activation, radiation damage, isotope transmutation, dosimetry and detector studies.
Prediction of radiation damage has always been a traditional field of application of Fluka, restricted
however in earlier versions to hadron damage to accelerator components. The new capability to deal with
the low-energy neutron component of the cascade has extended the field of interest to include electronics
and other sensitive detector parts. In addition, radiation damage calculations and shielding design are not
limited to proton accelerators any longer, but include electron accelerators of any energy, photon factories,
and any kind of radiation source, be it artificial or natural.
The present version of Fluka has been used successfully in such diverse domains as background studies
for underground detectors, cosmic ray physics, shielding of synchrotron radiation hutches, calculation of dose
received by aircraft crews, evaluation of organ dose in a phantom due to external radiation, detector design
for radiation protection as well as for high energy physics, electron, proton and heavy ion radiotherapy,
nuclear transmutation, neutrino physics, shielding of free-electron lasers, calculation of tritium production
at electron accelerators, energy amplifiers, maze design for medical accelerators, etc.
Chapter 2
A beginner’s guide
Fluka reads user input from an ASCII “standard input” file with extension .inp. The general characteristics
and rules of Fluka input are described in Chapter 6. The input consists of a variable number of “commands”
(called also “options”), each consisting of one or more “lines” (called also “cards” for historical reasons).
Apart from Fluka commands, the input file may contain also the description of the geometry of the
simulated set-up. This description, too, is provided by means of specific geometry “command cards” in a
special format described in Chapter 8.
The geometry description can, on request, be kept in a separate ASCII file: this feature is especially
useful when the same geometry is used in several different inputs, not only to save space but because
modifications can be made in one single place.
In addition, special commands are available in Fluka for more advanced problems involving magnetic fields,
time-dependent calculations, writing of history files (so-called “collision tapes”), transport of optical photons,
event by event scoring, calling user-written routines, etc. These options are expected to be requested only
by users having some previous experience with the more common commands: therefore they will be mostly
ignored in this beginner’s guide.
Let’s first recall the general structure of the Fluka command lines (cards). The geometry commands
will be reviewed later. Each card contains:
– one keyword,
– six floating point values (called WHATs),
– one character string (called SDUM)
Some WHATs represent numerical quantities (e.g. energy, coordinates), while others, converted to integers,
are indices corresponding to a material, a type of particle, a region etc. In this latter case, it is possible to
replace the number by the corresponding name (a character string).
Not necessarily all WHATs and SDUMs are used. In some cases, a command line can be followed by a
line of text (for instance a filename path or a title). Any line having an asterisk (*) in the first position is
treated as a comment. All lines (commands, text strings and comments) are echoed on the standard output
(the file with extension .out). In case of problems, it is a good idea to check how every line has been printed
in the standard output. Often, output reveals typing or format errors by showing how the program has
misinterpreted them.
In addition to the simple echo, an “interpreted” feedback to all commands is provided in the next
section of the standard output. Checking this part of the output is also very useful, because it helps to make
sure that the user’s intentions have been expressed correctly and understood by the code. See Chapter 9 on
Fluka output for a detailed description.
If a line contains an exclamation mark (!) all following characters are replaced by blanks. This feature
can be used for short in-line comments which will not be echoed in output.
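For example (an illustrative line only, with arbitrary values; the comment is placed after all the fields that are actually used, so that the blanked part carries no information):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
BEAMPOS          0.0       0.0     -50.0          ! in-line comment, not echoed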
The order of input commands is generally free, with only a few exceptions reported in Chapter 6.
Therefore, the order suggested in the following should not be considered as mandatory, but only one of the
possible ways to write Fluka input.
Be careful to properly align keywords and numbers. Format is not free, unless a command is issued at the
beginning of input (see option GLOBAL in section 7.32 or FREE in section 7.28). Even in the free format for
the input file, the part of the input describing the geometry can still be written in fixed format (which is
different from the general Fluka input format, see Chapter 8). There is the possibility of having free format
also for the geometry part: this can also be activated using the GLOBAL command (see 7.32).
In fixed format, in order to ensure proper alignment, a comment line showing a scale can be used
anywhere in the input file, for instance:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
In numerical fields, blanks are treated as zero. Therefore, numbers written with a decimal period and without
an exponent (e.g., 1.2345) can be placed anywhere inside their respective format fields. For instance, the
following two input lines are equally acceptable:
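(An illustrative pair, using the same BEAM card and values as the alignment examples further below; each decimal number simply sits at a different position inside its ten-character field.)
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
BEAM            200.       0.2       1.5       1.2       0.7   1.0    PROTON
BEAM      200.        0.2         1.5    1.2           0.7  1.0       PROTON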
If a number is written in exponential notation (e.g., 2.3E5) or in integer form (without decimal point), it
must be aligned to the right of the field. Depending on the platform and the compiler, sometimes the number
is correctly interpreted even if the alignment rule is not respected. However this is not guaranteed and
the right alignment rule should always be followed. For instance in:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
BEAM          2.E2       0.2       1.5       1.2       0.7        1.  PROTON
the first value might be interpreted as 2.E200. Another case is the following:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
BEAM            200.     0.2       1.5       1.2       0.7  1         PROTON
here the last numerical field would be interpreted as 1000000000. To avoid mistakes due to this kind of
input errors, the present Fluka versions now recognise such potential problems and the program is forced
to stop. At the same time a message is printed on the standard output file, as shown here for the above
example:
*** The 6th field 1 of the following input card ***
BEAM            200.     0.2       1.5       1.2       0.7  1         PROTON
*** does not contain a valid formatted fortran real number!!! ***
*** It is ambiguous and it could be read differently on different compilers ***
*** depending how strictly the blank=0 formatted input rule is implemented ***
Keywords (character strings such as BEAM and PROTON) must be aligned to the left of their field and must
be in upper case. An exception is the continuation character “ & ” used in some commands, which can be
placed anywhere within its 10-character field.
Non-numerical values of WHATs (“names”) can be aligned anywhere within the corresponding fixed-format
fields. Sometimes a special option requires a region or a particle number to be entered with a negative sign:
in this case the equivalent name must also be entered preceded by a minus sign (e.g., -PROTON instead of -1).
Let us now consider a simple starting application. We want to calculate the charged pion fluence produced
by a monochromatic proton beam of momentum 50 GeV/c impinging on a 5 cm thick beryllium target of
simple shape: a small parallelepiped (20 cm × 20 cm × 5 cm). A further simplification will be made for the
purpose of this example, neglecting all the surrounding environment and substituting it with ideal vacuum.
We will guide the reader through the different parts of a possible input file suited for this application.
The information which follows is meant to serve as a guide, but does not cover all the important points. It
is recommended that for each option card selected, the user read carefully the relevant manual entry, and
especially the explanatory notes.
Typically, an input file begins with a TITLE card (p. 240) followed by one line of text describing the problem,
or identifying that particular run, or providing some kind of generic information. In our case, for example
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
TITLE
Charged pion fluence inside and around a proton-irradiated Be target
Further information can be provided by comment cards, but the title, because it is printed at the top of the
standard output, is a useful reminder of what the corresponding file is about. In addition, the title is printed
in all separate output files (error file, estimator files etc.) together with the date and time of the Fluka run,
making it possible to keep track of their origin.
Commands such as GLOBAL and DEFAULTS, if needed, must immediately follow. Since this is intended
as a beginner’s guide, these can be ignored here (for most common problems the defaults provided are suffi-
cient). Let us recall that without specifying a DEFAULTS value, the NEW–DEFAults set of Fluka parameters
is loaded (see p. 92).
All “events” or “histories” are initiated by primary particles, which in the simplest case are monoenergetic,
monodirectional and start from a single point in space (pencil beam). The card BEAM (p. 71) defines
the particle energy (or momentum) while the card BEAMPOS (p. 76) controls their starting position and
direction. These two commands can be used also to define particle beams having a simple angular or
momentum distribution (Gaussian or rectangular), or a simple transverse profile (Gaussian, rectangular or
annular), or a simple space distribution of starting points (spherical, cartesian or cylindrical shell). Isotropic
or semi-isotropic angular emission can be described as a special case of an angular rectangular distribution.
For particle sources with more complex distributions in energy, space and direction, the user must
write, compile and link a special routine (SOURCE), following the instructions given in 13.2.19, and input a
card SOURCE (p. 228).
A summary of the input data concerning primary particles is printed in the standard output under
the title “Beam properties”.
The beam definition for our example can be the following (monochromatic, monodirectional proton
beam of momentum 50 GeV/c):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
BEAM         50.E+00                                                  PROTON
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
BEAMPOS          0.0       0.0     -50.0
In the cartesian geometry used by Fluka, the previous card means that the beam is injected at x, y, z
coordinates: 0, 0, -50 cm and is directed along the positive z axis. Of course, the choice of the point of
injection, the direction, etc., must be coherent with the geometrical description of the set-up, as discussed
in the following section.
The input for the Combinatorial Geometry (bodies, regions and optional region volumes, see respectively
8.2.4, 8.2.7, 8.2.9) must be immediately preceded by a GEOBEGIN card (see 7.30 and 8.2.1) and immediately
followed by a GEOEND card (see 7.31 and 8.2.11). These two cards follow the normal Fluka format. It is
recalled that the format for the geometry has its own special rules, described in Chapter 8.
Comment lines in the geometry input have an asterisk in first position as in the rest of Fluka input
(but on-line comments are not allowed). Body numerical data can be written in two different formats, a
“short” one (field length 10) and a “long” one (field length 22). The latter one is to be preferred when
higher precision is needed, for instance when using bodies such as truncated cones, cylinders or planes not
aligned with axes. It must be realised that using too few decimals can cause geometry errors when bodies
are combined into regions (portions of space not defined or doubly defined).
The whole geometry must be surrounded by a region of “blackhole” limited by a closed body (generally
an RPP parallelepiped). It is often a good idea to make this body much larger than the minimum required
size: this makes it easier to introduce possible future extensions. In some cases, as in our basic example, it is
also useful to surround the actual geometry by a region of ideal vacuum, and to have the blackhole region
surrounding the vacuum. This can be useful, for instance, in order to start the trajectory of the primary
particles outside the physical geometry (a particle may not be started on a boundary).
Both the body input section and the region input section must be ended with an END card (see 8.2.6
and 8.2.8). Optionally, region volumes can be input between the region END card and the GEOEND card (this
option can be requested by setting a special flag in the Geometry title card, see 8.2.2). The only effect of
specifying region volumes is to normalise per cm3 the quantities calculated via the SCORE option (see below):
for other estimators requiring volume normalisation the volume is input as part of the detector definition
(USRTRACK, USRCOLL, USRYIELD), or is calculated directly by the program (USRBIN).
The GEOEND card indicates the end of the geometry description, but can also be used to invoke the
geometry debugger.
The geometry output is an expanded echo of the corresponding input, containing information also on
memory allocation and on the structure of composite regions made of several sub-regions by means of the
OR operator.
A possible realisation of the geometry set up for our basic example can be seen in Fig. 2.1.
Only four bodies are used here: an RPP body (Rectangular Parallelepiped, body no. 3, see 8.2.4.1) to
define a volume which will be the Be target region, inside another larger RPP body (no. 2), which will be filled
with ideal vacuum and in turn is contained inside another larger RPP body (no. 1), to define the blackhole
region. The fourth body, an XYP half-space (defined by a plane perpendicular to the z axis, see 8.2.4.10),
will be used to divide the target into 2 different regions: the upstream half will be defined as the portion
of body 3 contained inside the half-space, and the downstream half as the portion outside it. Therefore,
region “3” (the upstream half of the target) is the part of body no. 3 which is also inside body 4, while
region “4” (downstream half of the target) is the part of body no. 3 which is not inside body 4. Region “2”
(the vacuum around the target) is defined as the inside of body no. 2 from which body no. 3 is subtracted.
Region “1” is simply the interior of body no. 1 from which body no. 2 is subtracted.
Note that bodies and regions can be identified by numbers, as described above, or with names (al-
phanumeric strings). The latter option is recommended, since it makes the preparation of the geometry
much easier, especially if free format is also chosen. Both possibilities are shown below.
The beam starting point has been chosen so that it is in the vacuum, outside the target region.
The geometry part of the input file can then be written as:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
GEOBEGIN COMBINAT
0 0 A simple Be target inside vacuum
RPP 1-5000000.0+5000000.0-5000000.0+5000000.0-5000000.0+5000000.0
RPP 2-1000000.0+1000000.0-1000000.0+1000000.0 -100.0+1000000.0
RPP 3 -10.0 +10.0 -10.0 +10.0 0.0 +5.0
* plane to separate the upstream and downstream part of the target
XYP 4 2.5
END
* black hole
BH1 5 +1 -2
* vacuum around
VA2 5 +2 -3
* Be target 1st half
BE3 5 +3 +4
* Be target 2nd half
BE4 5 +3 -4
END
GEOEND
2.3.7 Materials
Each geometry region is supposed to be filled with a homogeneous material, or with vacuum, or with
“blackhole”. The latter is a fictitious material used to terminate particle trajectories: any particle is discarded
when reaching a blackhole boundary. Materials can be simple elements or compounds, where an element can
have either natural composition or consist of a single nuclide, and compound indicates a chemical compound
or a mixture or an alloy (or an isotopic mixture) of known composition.
An element can be either pre-defined (see list in Table 5.3 on page 47) or defined by a MATERIAL
card (p. 163) giving its atomic number, atomic weight, density, name and a material identification number
> 2. The number of a non-pre-defined material can be chosen by the user, with the restriction that all lower
numbers must also be defined (but not necessarily used). However, in a name-based input, it is convenient
to leave the number blank: in this case the user does not need to know the number, which is assigned
automatically by the code and will be used only internally.
Number 1 is reserved for blackhole and 2 for ideal vacuum. There are 25 pre-defined materials; but
each of the numbers from 3 to 25 can be redefined, overriding the default definition. However, if the input
is explicitly number-based only (via command GLOBAL), a pre-defined material can only be redefined using the same name, by explicitly assigning to it a number equal to the original one. If the input is name-based,
it is better to leave the number blank.
Materials can also be defined with higher index numbers, provided no gaps are left in the numbering
sequence. For instance a material cannot be defined to have number 28 unless also 26 and 27 have been
defined. Again, explicitly assigning a number is not necessary if the input is fully name-based.
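As an illustration only (the densities quoted are approximate, and the elements were chosen simply because they are not in the pre-defined list), three additional single-element materials could be defined in a number-based input as:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
MATERIAL         3.0       0.0     0.534      26.0                   LITHIUM
MATERIAL        24.0       0.0      7.18      27.0                  CHROMIUM
MATERIAL        30.0       0.0      7.13      28.0                      ZINC
Here the numbers 26, 27 and 28 follow the 25 pre-defined materials without gaps; in a name-based input the fourth numerical field would simply be left blank.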
A compound is defined by a MATERIAL card plus as many COMPOUND cards (p. 84) as needed to
describe its composition. The MATERIAL card used to define a compound carries only the compound name,
density and, if input is explicitly number-based only, material number (atomic number and atomic weight
having no meaning in this case). Some special pre-defined compounds are available: for them the stopping
power is not calculated by a formula but is determined using the parameters recommended by ICRU [109].
Reference to these pre-defined compounds is normally by name and no MATERIAL and COMPOUND cards are
needed: if the input is explicitly number-based only (via command GLOBAL), a number needs to be assigned
to them using a MATERIAL card, of course leaving no gaps in the numbering sequence.
Materials pre-defined or defined in the standard input are referred to as “Fluka materials”, to
distinguish them from materials available in the low-energy neutron cross section library (called “low-energy
cross section materials”).
When transport of low-energy neutrons (E < 20 MeV) is requested (explicitly or by the chosen de-
faults), a correspondence is needed between each elemental (i.e., not compound) “Fluka material” and one
of the “low-energy cross section materials” available in the Fluka low-energy neutron library. The default
correspondence is set by the name: if more than one material with that name exists in the neutron library, the first in the list with that name (see 10.3, p. 325) is assumed by default. The default can be
changed using command LOW–MAT.
In the case of our example, only Beryllium is necessary, apart from blackhole and vacuum. In principle,
since Beryllium is one of the pre-defined Fluka materials, this part could even be omitted. However, for
pedagogical reasons the following card is proposed, where index 5 is assigned to the target material:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
MATERIAL 4.0 0.0 1.848 5.0 BERYLLIU
Notice that the chosen name is BERYLLIU and not BERYLLIUM, in order to match the name in the list of
“low-energy cross section materials” for low-energy neutrons (see Table 10.3), where material names have a
maximum length of 8 characters.
The standard output concerning materials is very extensive. First a list of multiple scattering parameters (printed by subroutine MULMIX) is reported for each material requested. This is mostly of little interest to the normal user, except for a table giving for each requested material the proportion of components, both
by number of atoms (normalised to 1) and by weight (normalised to material density). The same information
is repeated later on in another table entitled Material compositions.
If low-energy neutron transport has been requested (explicitly or by a chosen default), the following
section reports the relevant material information: in particular, a table entitled “Fluka to low en. xsec
material correspondence” specifying which material in the neutron cross section library has been mapped
to each input material. Note that much more detailed cross section information can be obtained by setting
a printing flag (WHAT(4)) in the LOW–NEUT command.
The Table Material compositions contains information about the requested materials, those pre-
defined by default and the elements used to define compounds. In addition to effective atomic number and
atomic weight, density and composition, the table shows the value of some typical quantities: inelastic and
elastic scattering length for the beam particles (not valid for electron and photon beams), radiation length
and inelastic scattering length for 20 MeV neutrons.
The next table contains material data related to stopping power (average excitation energy, density
effect parameters, and pressure in the case of gases) plus information about the implementation of various
physical effects and the corresponding thresholds and tabulations.
The last material table is printed just before the beginning of the history calculations, and concerns
the
Correspondence of regions and EMF-FLUKA material numbers and names.
A material must be associated to each of the geometry regions, except to those defined as blackhole. This
is done in a very straightforward way by command ASSIGNMAt. Assigning explicitly blackhole to a region is
allowed, but is not necessary (except for region 2) because a region is blackhole by default unless another
material has been associated to it. (Region 2, if not assigned a material, is COPPER by default).
The table entitled Regions: materials and fields, in the standard output, can be consulted to
check that material assignment has been done as desired.
The same material assignments in a name-based input would be the following (BLCKHOLE and VACUUM are
the pre-defined names of material 1 and 2):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
* Be target, 1st and 2nd half
ASSIGNMAT BERYLLIU UpstrBe DwnstrBe
* External Black Hole
ASSIGNMAT BLCKHOLE Blckhole
* Vacuum
ASSIGNMAT VACUUM Vacarund
The implicit NEW–DEFA default setting, adopted in the example, sets, among other things, the production
and transport threshold of heavy particles to 10 MeV. Production thresholds for e+ e− and photons must be
explicitly set using the EMFCUT command for all materials in the problem, as described in detail on p. 113.
Let us choose also in this case a 10 MeV threshold for the single material of the example (previously marked
with material index 5 and name BERYLLIU). Following the instructions about the EMFCUT option, the card
can be written as:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
EMFCUT -0.010 0.010 1.0 5.0 PROD-CUT
where the first numerical field is the threshold for e+ e− (the minus sign meaning kinetic energy) and the
second is for photons. The material number is given in the fourth numerical field. For details on all
other parameters, and for other possibilities (for example how to introduce a transport cutoff different from the production threshold), the user should carefully consult the Notes in Sec. 7.19.
In a name-based input, the above card could be:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
EMFCUT -0.010 0.010 1.0 BERYLLIU PROD-CUT
Production and transport threshold for all other particles can be overwritten using the PART–THRes command
(p. 196).
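As a purely illustrative sketch (the field layout assumed here, with a negative WHAT(1) interpreted as kinetic energy in GeV and a particle range in the following fields, must be checked against the PART–THRes description on p. 196 before use), a card lowering the proton threshold to 2 MeV could look like:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
PART-THR      -0.002    PROTON    PROTON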
Even though, for setting-up purposes, it is conceivable that no detector be requested in a preliminary run,
in most cases Fluka is used to predict the expectation value of one or more quantities, as determined by
the radiation field and by the material geometry described by the user: for such a task several different
estimators are available. The quantities which are most commonly scored are dose and fluence, but others
are available. Dose equivalent is generally calculated from differential fluence using conversion coefficients.
The simplest estimator available to the user is a historical vestige, surviving from the “ancient” Fluka
(pre-1988) where the only possible output quantities were energy deposition and star density in regions. It
is invoked by option SCORE (p. 226), requesting evaluation of one to four different quantities. These can be
different forms of energy density (proportional to dose), or of star density (approximately proportional to
fluence of selected high-energy hadrons).
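For instance, the complete example input listed later in this chapter contains the following card, requesting for each region the deposited energy density and the density of stars produced by primary particles:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
SCORE         ENERGY  BEAMPART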
For this estimator, the detectors are pre-determined: the selected quantities are reported for each
region of the geometry. The corresponding results, printed in the main output immediately after the last
history has been completed, are presented in 6 columns as follows:
region region region first second third fourth
number name volume quantity quantity quantity quantity
1 ...... ...... ........ ........ ........ ........
2 ...... ...... ........ ........ ........ ........
etc.
on a line for each geometry region. The region volumes (in cm3 ) have the value 1.0, or values optionally
supplied by the user at the end of the geometry description (see 8.2.2). All other columns are normalised
per region volume and per primary particle.
Other estimators are more flexible: the corresponding detectors can be requested practically in any number,
can be written as unformatted or text files, and in most cases can provide differential distributions with
respect to one or more variables. On the other hand, their output is presented in a very compact array form
and must generally be post-processed by the user. For this purpose several utility programs are available; output in text format can even be exported to a spreadsheet for post-processing.
USRBDX (see p. 246) is the command required to define a detector for the boundary-crossing estimator.
It calculates fluence or current, mono- or bi-directional, differential in energy and angle on any boundary
between two selected regions. The area normalisation needed to obtain a current in particles per cm2 is
performed using an area value input by the user: if none is given, the area is assumed to be = 1.0 cm2
and the option amounts simply to counting the total number of particles crossing the boundary. Similarly if
fluence is scored, but in this case each particle is weighted with the secant of the angle between the particle
trajectory and the normal to the boundary surface at the crossing point.
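Schematically, calling $A$ the boundary area, $w_i$ the statistical weight of the $i$-th crossing particle and $\theta_i$ its angle with respect to the normal to the surface at the crossing point, the two scored quantities are
\[ J = \frac{1}{A}\sum_i w_i \quad\mbox{(current)}, \qquad\qquad \Phi = \frac{1}{A}\sum_i \frac{w_i}{\cos\theta_i} \quad\mbox{(fluence)}. \]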
This is one of the estimators proposed for our example. We will request two boundary crossing
detectors, one to estimate fluence and one for current, of particles crossing the boundary which separates
the upstream and the downstream half of the target.
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
* Boundary crossing fluence in the middle of the target (log intervals, one-way)
USRBDX 99.0 209.0 -47.0 3.0 4.0 400. piFluenUD
USRBDX +50.0 +50.0 0.0 10.0 &
*
* Boundary crossing current in the middle of the target (log intervals, one-way)
USRBDX -1.0 209.0 -47.0 3.0 4.0 400. piCurrUD
USRBDX +50.00 +50.0 0.0 10.0 &
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
USRBDX 99.0 PIONS+- -47.0 UpstrBe DwnstrBe 400. piFluenUD
USRBDX +50.0 +50.0 0.0 10.0 &
*
USRBDX -1.0 PIONS+- -47.0 UpstrBe DwnstrBe 400. piCurrUD
USRBDX +50.00 +50.0 0.0 10.0 &
According to the instructions reported in Sec. 7.76, it can be seen that the combined fluence of π + and
π − is requested only when particles exit region “3” (“UpstrBe”, the upstream half of the target) to enter
into region “4” (“DwnstrBe”, the downstream half). There is no interest in the reverse, therefore “one-way
scoring” is selected. The scoring of the first detector will be inverse cosine-weighted, in order to define
correctly the fluence. Results will be written unformatted on unit 47 for both quantities (so there will be
two “Detectors” on the same output unit, but this is not mandatory). The energy distribution is going to
be binned in 50 logarithmic intervals, from 0.001 GeV (the default minimum) up to 50 GeV. The angular
distribution will be binned into 10 linear solid angle intervals from 0. to 2π (having chosen the one-way
estimator). The results will be normalised dividing by the area of the boundary (separation surface between
the two regions, in this case the transverse section of the target), and will provide a double-differential fluence
or current averaged over that surface (in cm−2 GeV−1 sr−1 ).
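For reference, $n$ logarithmic intervals between $E_{\min}$ and $E_{\max}$ have bin boundaries
\[ E_i = E_{\min}\,\left(\frac{E_{\max}}{E_{\min}}\right)^{i/n}, \qquad i = 0,\ldots,n , \]
here with $n = 50$, $E_{\min} = 0.001$ GeV and $E_{\max} = 50$ GeV.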
Other fluence scoring options, based respectively on a track-length and on a collision estimator, are
USRTRACK (see p. 262) and USRCOLL (p. 257) which request the estimation of volume-averaged fluence
(differential in energy) for any type of particle or family of particles in any selected region. The volume
normalisation needed to obtain the fluence as track-length density or collision density is performed using a
volume value input by the user: if none is given, the volume is assumed to be = 1.0 cm3 and the result
will be respectively the total track-length in that region, or the total number of collisions (weighted with the
mean free path at each collision point).
Note that if additional normalisation factors are desired (e.g., beam power), this can be achieved by
giving in input the “volume” or “area” value multiplied or divided by those factors. Options USRTRACK,
USRCOLL and USRBDX can also calculate energy fluence, if the “particle” type is set = 208.0 (energy, name
ENERGY) or 211.0 (electron and photon energy, name EM-ENRGY).
In our example, we are requesting two track-length detectors, to get the average fluence in the upstream
half and in the downstream half of the target, respectively.
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
* Track-length fluence inside the target, Upstream part and Downstream part
* Logarithmic energy intervals
USRTRACK -1.0 209.0 -48.0 3.0 1000.0 20. piFluenU
USRTRACK 50.0 0.001 &
USRTRACK -1.0 209.0 -49.0 4.0 1000.0 20. piFluenD
USRTRACK 50.0 0.001 &
The volume input is 20 × 20 × 2.5 = 1000 cm3 . We are requesting an energy spectrum in 20 logarithmic
intervals between 0.001 and 50 GeV. In this case, we ask that the corresponding output be printed, unfor-
matted, on two different files.
In a name-based input, the above example could be:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
USRTRACK -1.0 PIONS+- -48.0 UpstrBe 1000.0 20. piFluenU
USRTRACK 50.0 0.001 &
USRTRACK -1.0 PIONS+- -49.0 DwnstrBe 1000.0 20. piFluenD
USRTRACK 50.0 0.001 &
Option USRBIN provides detailed space distributions of energy deposition, star density or integrated fluence
(not energy fluence, unless by writing a special user routine). Using some suitable graphics package, USRBIN
output (see p. 249) can be presented in the form of colour maps. Programs for this purpose are available
in the $FLUPRO/flutil directory (Pawlevbin.f and various kumac files), and on the Fluka website www.fluka.org.
USRBIN results are normalised to bin volumes calculated automatically by the program (except in the
case of region binning and special 3-variable binning which are only seldom used).
The binning structure does not need to coincide with any geometry region. In our example we propose
to ask for two Cartesian space distributions, one of charged pion fluence and one of total energy deposited.
The first will extend over a volume larger than the target, because fluence can be calculated even in vacuum.
Energy deposition, on the other hand, will be limited to the target volume.
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
* Cartesian binning of the pion fluence inside and around the target
USRBIN 10.0 209.0 -50.0 50.0 50.0 50. piFluBin
USRBIN -50.0 -50.0 -10.0 100.0 100.0 60.0 &
* Cartesian binning of the deposited energy inside the target
USRBIN 10.0 208.0 -51.0 10.0 10.0 5. Edeposit
USRBIN -10.0 -10.0 0.0 20.0 20.0 5.0 &
Also in this case, the request is for output on two separate files.
Or, using names:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
USRBIN 10.0 PIONS+- -50.0 50.0 50.0 50. piFluBin
USRBIN -50.0 -50.0 -10.0 100.0 100.0 60.0 &
USRBIN 10.0 ENERGY -51.0 10.0 10.0 5. Edeposit
USRBIN -10.0 -10.0 0.0 20.0 20.0 5.0 &
Angular yields around a fixed direction of particles exiting a given surface can be calculated using option
USRYIELD (see p. 265). The results are double-differential distributions with respect to a pair of variables,
one of which is generally energy-like (kinetic energy, momentum, etc.) and the other one angle-like (polar
angle, rapidity, Feynman-x, etc.) Distributions in LET (Linear Energy Transfer) can also be requested by
this option. An arbitrary normalisation factor can be input.
Another commonly used scoring option is RESNUCLEi (see p. 218), which calculates residual nuclei
production in a given region. A normalisation factor (usually the region volume) can be input.
A detailed summary of the requested detectors is printed on standard output. The same information
is printed in the same format in estimator ASCII output files, and is available in coded form in estimator
unformatted output files.
The random number sequence used in a run is initialised by default by the seeds contained in file random.dat
provided with the code. To calculate the statistical error of the results, it is necessary to perform other
independent runs (at least 4 or 5), each with a different independent initialisation, using the seeds written
by the program at the end of each run. The rfluka script provided with the code on UNIX and LINUX
platforms takes care of this task, provided the following card is issued in the input file:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
RANDOMIZe 1.0 0.0
The seeds of the random number generator are printed on a special file in hexadecimal form at the end of
each group of histories (the size of a group depends on the number of histories requested in the START card).
Instead of getting the seeds from the last run, it is also possible to initialise directly another indepen-
dent random number sequence by setting the second RANDOMIZE parameter equal to a different number,
for instance:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
RANDOMIZE 1.0 1198.
At the end of the input file, a START card (see p. 231) is mandatory in order to actually start the calculation.
That card must indicate also the number of particle histories requested. The run, however, may be completed
before all the histories have been handled in two cases: if a time limit has been met (on some systems) or if
a “stop file” is created by the user (see instructions in Note 2 to option START).
The START card is optionally followed by a STOP card. For example, if the user wants to generate
100000 histories, the input file can be closed with the following two cards:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
START 100000.0
STOP
In summary, the input file for our basic example (example.inp) name-based and written in fixed format,
could be the following:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
TITLE
Charged pion fluence inside and around a proton-irradiated Be target
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
BEAM 50.E+00 PROTON
BEAMPOS 0.0 0.0 -50.0
*
GEOBEGIN COMBNAME
A simple Be target inside vacuum
RPP blakhole -5000000.0 +5000000.0 -5000000.0 +5000000.0 -5000000.0 +5000000.0
RPP vacumbox -1000000.0 +1000000.0 -1000000.0 +1000000.0 -100.0 +1000000.0
RPP betarget -10.0 +10.0 -10.0 +10.0 0.0 +5.0
* plane to separate the upstream and downstream part of the target
XYP cutplane 2.5
END
* black hole
Blckhole 5 +blakhole -vacumbox
* vacuum around
Vacarund 5 +vacumbox -betarget
* Be target 1st half
UpstrBe 5 +betarget +cutplane
* Be target 2nd half
DwnstrBe 5 +betarget -cutplane
END
GEOEND
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
MATERIAL 4.0 0.0 1.848 5.0 BERYLLIU
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
* Be target, 1st and 2nd half
ASSIGNMAT BERYLLIU UpstrBe DwnstrBe
* External Black Hole
ASSIGNMAT BLCKHOLE Blckhole
* Vacuum
ASSIGNMAT VACUUM Vacarund
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
* e+e- and gamma production threshold set at 10 MeV
EMFCUT -0.010 0.010 1.0 BERYLLIU PROD-CUT
* score in each region energy deposition and stars produced by primaries
SCORE ENERGY BEAMPART
* Boundary crossing fluence in the middle of the target (log intervals, one-way)
USRBDX 99.0 PIONS+- -47.0 UpstrBe DwnstrBe 400. piFluenUD
USRBDX +50.0 +50.0 0.0 10.0 &
* Boundary crossing current in the middle of the target (log intervals, one-way)
USRBDX -1.0 PIONS+- -47.0 UpstrBe DwnstrBe 400. piCurrUD
USRBDX +50.00 +50.0 0.0 10.0 &
* Tracklength fluence inside the target, Upstream part and Downstream part
* Logarithmic energy intervals
USRTRACK -1.0 PIONS+- -48.0 UpstrBe 1000.0 20. piFluenU
USRTRACK 50.0 0.001 &
USRTRACK -1.0 PIONS+- -49.0 DwnstrBe 1000.0 20. piFluenD
USRTRACK 50.0 0.001 &
* Cartesian binning of the pion fluence inside and around the target
USRBIN 10.0 PIONS+- -50.0 50.0 50.0 50. piFluBin
USRBIN -50.0 -50.0 -10.0 100.0 100.0 60.0 &
* Cartesian binning of the deposited energy inside the target
USRBIN 10.0 ENERGY -51.0 10.0 10.0 5. Edeposit
USRBIN -10.0 -10.0 0.0 20.0 20.0 5.0 &
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
RANDOMIZe 1.0
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
START 100000.0
STOP
The same input file, number-based and using the free format option for the FLUKA commands, but not for
the geometry, can instead be written as follows:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
TITLE
Charged pion fluence inside and around a proton-irradiated Be target
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
GLOBAL 2.0
BEAM 50.E+00 0. 0. 0. 0. 0. PROTON
BEAMPOS 0. 0. -50.0 0. 0. 0. ’ ’
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
GEOBEGIN COMBINAT
A simple Be target inside vacuum
RPP 1-5000000.0+5000000.0-5000000.0+5000000.0-5000000.0+5000000.0
RPP 2-1000000.0+1000000.0-1000000.0+1000000.0 -100.0+1000000.0
RPP 3 -10.0 +10.0 -10.0 +10.0 0.0 +5.0
XYP 4 2.5
END
* black hole
BH1 5 +1 -2
* vacuum around
VA2 5 +2 -3
* Be target 1st half
BE3 5 +3 +4
* Be target 2nd half
BE4 5 +3 -4
END
GEOEND
The remaining commands (MATERIAL, ASSIGNMAt, EMFCUT, SCORE, the scoring detectors, RANDOMIZe and START) follow with numeric values in place of names, as illustrated in the number-based cards shown in the previous sections.
Other possible combinations are name-based free format and number-based fixed format.
As mentioned above, the rfluka script in $FLUPRO/flutil should be used to drive the Fluka run. In the
following it is supposed that the user is going to ask for five statistically independent runs, each one made
of 100000 histories, of the proposed basic example. The command is:
$FLUPRO/flutil/rfluka -N0 -M5 example &
(on LINUX/UNIX, the & character allows the program to run in the background without “freezing” the terminal window).
The script creates a temporary subdirectory called fluka_nnnn, where nnnn is the number of the
subprocess. For instance, when the first run (or “cycle”) starts, the user will see on the terminal lines similar
to the following ones:
$TARGET_MACHINE = Linux
$FLUPRO = /home/user/flupro
2789: old priority 0, new priority 10
At the end of each cycle the output files will be copied onto the running directory, the temporary
directory will be erased and a new one will be created where the next run will take place. The names of the
output files from each run are built by appending to the input file name the run number and an extension
depending on their content: .out for the standard output, .err for the error file, .log for the log file and
fort.nn for the estimator files (with nn = absolute value of the selected output unit). The file containing
the random number seeds will be called ran<input file name><run number>.
The error file may contain error messages related to the event generators (for instance when the
program does not manage to conserve exactly energy or another quantity) or to the geometry tracking.
Most of those are generally only warning messages which can be ignored unless there is a large number of
them.
The log file generally contains messages related to fatal errors (input errors, overflow, etc.)
During a multiple run, lines like the following will appear on the user’s screen:
================================ Running FLUKA for cycle # 1 ====================
Removing links
Removing links
.....
.....
Removing links
At this time, in the working directory, the following new files exist:
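(An illustrative listing, assuming the first cycle of example.inp has completed; the names follow the convention described above.)
example001.out    example001.err    example001.log
example001_fort.47    example001_fort.48    example001_fort.49
example001_fort.50    example001_fort.51    ranexample001
Similar files, with run numbers 002 to 005, will appear as the remaining cycles complete.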
In Chapter 9 the user can find a comprehensive description of the content of the Fluka standard
output. For the purpose of this beginner’s guide, it can just be pointed out that, according to the content
of the USRBDX command, the files with extension fort.47 contain, in binary form, the boundary crossing
estimator output for the required pion fluence and current detectors (for more details see 9.5.5). These files
must be combined together to produce a table of values with their statistical errors which can be easily
interfaced by the user to some analysis codes and/or graphic visualisation tools. Similarly, the files with
extension fort.48 and fort.49 will contain the track-length estimator output, and those with extension
fort.50 and fort.51 the output from USRBIN.
Binary files from the USRBDX estimator can be accessed by means of the Usxsuw.f readout code, which is
located in the $FLUPRO/flutil directory.
That readout code can be easily compiled. For example, the same compiling and linking Fluka tools
can be used for this purpose:
cd $FLUPRO/flutil
./lfluka usxsuw.f -o usxsuw
The simplest way, however, is to use the makefile which is available in the $FLUPRO/flutil directory. In
that directory, just type:
make
and all the post-processing utilities will be compiled and linked.
In order to process the 5 output files produced by the proposed example, the following interactive
procedure can be used:
cd /home/user/flukawork
$FLUPRO/flutil/usxsuw
The readout code will ask for the first Fluka detector file name:
Type the input file:
For each detector file the program will show the content of the TITLE card of the Fluka input file,
the date and time of the Fluka run and the number of histories for the given run.
The request will be iterated until a blank line is given. This will be interpreted as the end of the
list of files, and then a name for the output file prefix will be requested. Let’s use, for example, the prefix
pionbdx:
Type the input file: example001_fort.47
Charged pion fluence inside and around a proton-irradiated Be target
DATE: 7/15/ 5, TIME: 16:22:11
100000.
100000
Type the input file: example002_fort.47
Charged pion fluence inside and around a proton-irradiated Be target
DATE: 7/15/ 5, TIME: 16:23: 3
100000.
100000
Type the input file: example003_fort.47
Charged pion fluence inside and around a proton-irradiated Be target
DATE: 7/15/ 5, TIME: 16:23:54
100000.
100000
Type the input file: example004_fort.47
Charged pion fluence inside and around a proton-irradiated Be target
DATE: 7/15/ 5, TIME: 16:24:51
100000.
100000
Type the input file: example005_fort.47
Charged pion fluence inside and around a proton-irradiated Be target
DATE: 7/15/ 5, TIME: 16:25:45
100000.
100000
Type the input file:
Type the output file name: pionbdx
Three files are produced, with names built from this prefix: pionbdx, pionbdx_sum.lis and pionbdx_tab.lis. The first one (pionbdx) is again a binary file that can be read out at any time by Usxsuw. The content of this file is statistically equivalent to that of the sum of the files used to obtain it, and it can replace them to be combined with further output files if desired (the Usxsuw program takes care of giving it the appropriate weight).
The other two files are ASCII text files.
Let us first examine pionbdx_sum.lis. This contains many comments which can help the user to
understand the results. Since by means of the USRBDX command separate detectors for pion fluence and
current have been requested, with their output on the same logical unit, there will be two different sections
in the file, identified by the word “Detector”: Detector no. 1 is for fluence and Detector no. 2 is for current,
because this is the order in which the USRBDX commands have been given.
Detector n: 1( 1) piFluenUD
(Area: 400. cmq,
distr. scored: 209 ,
from reg. 3 to 4,
one way scoring,
fluence scoring)
The total (summed) number of primaries (histories) is reported first, then the main features of the USRBDX request are summarised. The following numbers represent the energy and angle integrated fluence
(“total response”).
Here and later, the statistical error is always expressed in percentage.
After this heading, the differential fluence tabulation as a function of (pion) energy, and integrated
over solid angle, is reported, starting with the boundaries of the energy bins. As a general convention, these
values are given from the highest to the lowest value:
**** Different. Fluxes as a function of energy ****
**** (integrated over solid angle) ****
Flux (Part/GeV/cmq/pr):
Soon after, the cumulative fluence distribution as a function of energy is also given:
**** Cumulative Fluxes as a function of energy ****
**** (integrated over solid angle) ****
The numbers for the cumulative distribution have been obtained by multiplying each value of the
differential distribution by the corresponding energy bin width (variable if the distribution is logarithmic as
in our example). The integral fluence in any given energy interval can be obtained as the difference between
the values of the cumulative distribution at the two bounds of that interval.
Since more than one angular interval was requested, at this point the angular distribution with respect
to the normal at the boundary crossing point is reported, both in steradians and in degrees:
**** Double diff. Fluxes as a function of energy ****
Let us take for instance the energy bin from 0.345 GeV to 0.278 GeV:
Flux (Part/sr/GeV/cmq/pr):
Flux (Part/deg/GeV/cmq/pr):
Detector n: 2( 2) piCurrUD
(Area: 400. cmq,
distr. scored: 209 ,
from reg. 3 to 4,
one way scoring,
current scoring)
and so on.
Note that in this case the ratio between the calculated fluence (8.690E-04) and the corresponding current
(7.169E-04) is about 1.2. The ratio between the numerical values of the two quantities would be 1 if the
pions were all crossing the boundary at a right angle, 2 in the case of an isotropic distribution, and could
even tend to infinity if the particle direction were mainly parallel to the boundary:
Fluence and current are very different quantities and should not be confused!
Note also that the above output reports the current value not normalised per unit area. This is equivalent
to a simple count of crossing particles, so we see that in our example about 0.287 charged pions per primary
proton cross the middle plane of the target.
The previous file has a structure which is not easily interfaced to other readout codes. Easy interfacing can instead be achieved by means of the other output file, pionbdx_tab.lis: there the user can find, for each
Detector, a simple 4-column structure for the differential fluence integrated over solid angle. The table starts
from the lowest energy, and the four columns represent respectively Emin , Emax , the differential fluence and
the statistical error in percentage:
# Detector n: 1 piFluenUD (integrated over solid angle)
# N. of energy intervals 50
1.000E-03 1.242E-03 1.300E-04 5.216E+01
1.242E-03 1.542E-03 1.631E-04 8.206E+01
1.542E-03 1.914E-03 1.025E-04 9.900E+01
1.914E-03 2.376E-03 8.693E-05 6.203E+01
2.376E-03 2.951E-03 7.245E-05 3.355E+01
2.951E-03 3.663E-03 7.420E-05 3.368E+01
3.663E-03 4.548E-03 2.109E-04 3.472E+01
4.548E-03 5.647E-03 1.567E-04 4.401E+01
5.647E-03 7.012E-03 2.927E-04 3.929E+01
.....
By convention, when in a given bin the statistics is not sufficient to calculate a standard deviation, the
statistical error is printed as 99%. For a null fluence the statistical error is also null.
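As a reminder of the underlying statistics (the utility programs may in addition weight each cycle by its number of primaries), from $n$ independent cycles giving values $x_1,\ldots,x_n$ for a given bin, the mean and its percentage error can be estimated as
\[ \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i , \qquad \Delta(\%) = \frac{100}{\bar{x}}\,\sqrt{\frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n\,(n-1)}} , \]
which is why at least 4 or 5 independent cycles are recommended.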
The program to analyse USRTRACK binary output is called ustsuw.f and can also be found in
$FLUPRO/flutil. Its use is very similar to that of usxsuw.f described above. Applying it to the
example00*_fort.48 files (output of the first USRTRACK detector in our example), we obtain for the average
fluence of charged pions in the upstream half of the beryllium target:
Tot. response (p/cmq/pr) 5.4765277E-04 +/- 0.6965669 %
and from the example00*_fort.49 files (pion fluence in the downstream half):
Tot. response (p/cmq/pr) 1.3474772E-03 +/- 0.5352812 %
As was to be expected, the average fluence obtained above by the boundary crossing estimator on
the middle surface (8.69E-04 cm−2 ) has a value which is intermediate between these two.
To analyse the binary output from USRBIN, two programs are needed, both available in $FLUPRO/flutil.
The first, usbsuw.f, performs a statistical analysis of the results and produces a new unformatted file, with
a name chosen by the user. The second program, Usbrea.f, reads the latter file and writes on a formatted
file two arrays, namely the content of each bin, averaged over the given number of runs, followed by the
corresponding errors in percent. The second USRBIN detector defined in example.inp gives the following
values of energy deposition (in GeV/cm3 ):
1
Cartesian binning n. 1 "Edeposit " , generalized particle n. 208
X coordinate: from -1.0000E+01 to 1.0000E+01 cm, 20 bins ( 1.0000E+00 cm wide)
Y coordinate: from -1.0000E+01 to 1.0000E+01 cm, 20 bins ( 1.0000E+00 cm wide)
Z coordinate: from 0.0000E+00 to 5.0000E+00 cm, 5 bins ( 1.0000E+00 cm wide)
Data follow in a matrix A(ix,iy,iz), format (1(5x,1p,10(1x,e11.4)))
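As an illustration of how such a formatted file could be post-processed outside the Fluka utilities (this sketch is not part of the distribution; read_usbrea is a hypothetical helper name, and the only layout assumption is the “Data follow” marker line shown above, with the percentage error matrix, if present, appended after the bin contents), a short Python script could be:

import numpy as np

def read_usbrea(path, shape=(20, 20, 5)):
    # Minimal sketch: collect every token that parses as a float after the
    # "Data follow" marker; header words and format strings are skipped.
    # shape = (nx, ny, nz) of the Cartesian binning (here the Edeposit one).
    nvals = shape[0] * shape[1] * shape[2]
    values = []
    reading = False
    with open(path) as f:
        for line in f:
            if "Data follow" in line:
                reading = True
                continue
            if not reading:
                continue
            for tok in line.split():
                try:
                    values.append(float(tok))
                except ValueError:
                    pass  # e.g. words in the header of the error matrix
    # A(ix,iy,iz) with ix running fastest corresponds to Fortran ordering
    data = np.array(values[:nvals]).reshape(shape, order="F")
    errors = None
    if len(values) >= 2 * nvals:
        errors = np.array(values[nvals:2 * nvals]).reshape(shape, order="F")
    return data, errors  # bin contents (here GeV/cm3 per primary) and percent errors

Here the shape (20, 20, 5) corresponds to the binning limits printed in the header above; for a different binning it must be adapted accordingly.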
The $FLUPRO/flutil directory contains two other programs similar to usxsuw.f and ustsuw.f to average
the outputs from other Fluka estimators and binnings.
Even when one set of defaults is enforced, the user can still override some of them by modifying
single parameters. The most used ones concern energy cutoffs (option EMFCUT for electrons and photons,
LOW–BIAS for low-energy neutrons, PART–THRes for all other particles, described respectively on page 113,
154 and 196), thresholds for delta-ray production (option DELTARAY, p. 98), particles to be ignored (option
DISCARD, p. 104), switching on or off some physical effect (EMFCUT, IONFLUCT, MUPHOTON, PAIRBREM,
PHOTONUC, PHYSICS, POLARIZAti: see respectively p. 113, 145, 178, 194, 198, 202, 212), and (more
rarely) the size of the step for transporting charged particles (FLUKAFIX, MCSTHRESh, MULSOPT, STEPSIZE,
see respectively p. 133, 170, 174, 232).
Energy cutoffs for each particle are listed in a table on standard output
(Particle transport thresholds).
2.7 Biasing
Although Fluka is able to perform fully analogue particle transport calculations (i.e., to reproduce faithfully
actual particle histories), in many cases of very non-uniform radiation fields, such as those encountered in
shielding design, only a very small fraction of all the histories contributes to the desired response (dose,
fluence) in the regions of interest, for instance behind thick shielding. In these cases, the user’s concern is
not to simulate exactly what occurs in reality, but to estimate in the most efficient way the desired response.
This can be obtained by replacing the actual physical problem with a mathematically equivalent one, i.e.,
having the same solution but faster statistical convergence.
– In the limit of the number of histories tending to infinity, the value of each calculated quantity tends
exactly to the same average in the analogue and in the corresponding biased calculation. In other
words, biasing is mathematically correct and implies no approximation. However, an acceleration of
convergence in certain regions of phase space (space/energy/angle) will generally be paid for by a
slower convergence in other regions.
Because an actual calculation does not use an infinite number of particles, but is necessarily truncated
after a finite number of histories, results must be considered with some judgment. For instance, if the
number of histories is too small, it can happen that total energy is not conserved (check the energy
budget summary at the very end of main output!)
– A bad choice of biasing parameters may have consequences opposite to what is desired, namely a slower
convergence. A long experience, and often some preliminary trial-and-error investigation, are needed
in order to fully master these techniques (but some biasing algorithms are “safer” than others).
– Because biasing implies replacing some distributions by others having the same expectation value
but different variance (and different higher moments), biasing techniques in general do not conserve
correlations and cannot describe correctly fluctuations.
The simplest (and safest) biasing option offered by Fluka is importance biasing, which can be re-
quested by option BIASING (p. 80). Each geometry region is assigned an “importance”, namely a number
between 10^-4 and 10^4, proportional to the contribution that particles in that region are expected to give to
the desired result. The ratio of importances in any two adjacent regions is used to drive a classical biasing
algorithm (“Splitting” and “Russian Roulette”). In a simple, monodimensional attenuation problem, the
importance is often set equal to the inverse of the expected fluence attenuation factor for each region.
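As a purely schematic sketch (the region names Shield1 and Shield2 do not belong to the example of this chapter, and the role assumed here for each field, with WHAT(1) selecting all particles, WHAT(3) carrying the region importance and WHAT(4)–WHAT(5) the region range, should be checked against the BIASING description on p. 80), importances doubling from one shield layer to the next could be requested with cards such as:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
BIASING          0.0       1.0       2.0   Shield1   Shield1
BIASING          0.0       1.0       4.0   Shield2   Shield2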
In electron accelerator shielding, two other biasing options are commonly employed: EMF–BIAS (p. 108)
and LAM–BIAS (p. 150). The first one is used to request leading particle biasing, a technique which reduces
considerably the computer time required to handle high-energy electromagnetic showers. With this option,
CPU time becomes proportional to primary energy rather than increasing exponentially with it. Option
LAM–BIAS is necessary in order to sample with acceptable statistics photonuclear reactions which have a
much lower probability than competing electromagnetic photon reactions, but are often more important
from the radiological point of view.
Other important options are those which set weight window biasing (WW–FACTOr, p. 270, WW–THRESh, p. 275, and WW–PROFIle, p. 273), but their use requires more experience than assumed
here for a beginner.
Particle importances, weight windows and low-energy neutron biasing parameters are reported for each
region on standard output. On the user’s request (expressed as SDUM = PRINT in a BIASING card), Russian
Roulette and Splitting counters are printed for each region on standard output before the final summary.
Such counters can be used for a better tuning of the biasing options.
2.8 Flair
Flair [92] is an advanced user interface for Fluka to facilitate the editing of Fluka input files, execution
of the code and visualization of the output files. It is based entirely on Python and Tkinter (http://wiki.python.org).
Flair provides the following functionality:
1. front-end interface for easy and almost error-free editing, as well as validation and error correction of the input file during editing
2. interactive geometry editor, allowing bodies and regions to be edited in a visual/graphical way with immediate debugging information
6. python API for manipulating the input files, post processing of the results and interfacing to gnuplot
7. import/export to various formats (MCNP, povray, dxf, bitmap-images)
The philosophy of Flair is to work at an intermediate level of user interface: not so high as to hide the inner functionality of Fluka from the user, and not so low that the user is in constant need of the Fluka manual to verify the options needed for each card. Flair works directly with the input file of Fluka and is able to read/write all acceptable Fluka input formats. Inside the Flair editor the user works directly with the Fluka cards, with a small dialog for each card that displays the card information in an interpreted, human-readable way. The only exception is that the cards in Flair are called “extended cards”: each card is not composed only of 6 WHATs and 1 SDUM, but rather contains all related information in one unit (comments preceding the card, continuation cards, titles, etc.).
Installation
3.1 Requirements
Fluka is available at the moment only for x86 and x86_64 Linux systems. The Fluka package can be downloaded from the Fluka website www.fluka.org. This version of the code should be run only on the platforms for which it has been released: Linux x86 (compiled with g77) and Linux x86_64 (compiled with gfortran).
The Linux x86 version must be compiled at 32 bits with g77 but can run on both 32- and 64-bit machines, while the Linux x86_64 version must be compiled with gfortran and works only on 64-bit machines. The latter is still tentative: we cannot exclude some issues with that version. The code has been checked and validated for these platforms/compilers only for the time being.
The availability of the source code (available under the license reported at the beginning of this
volume) shall not be exploited for tentative builds on other architectures or with different compilers/compiler
options than the ones recommended by the development team. Our experience shows that for a code of the
complexity of Fluka the chances of hitting one or more compiler issues are very large. Therefore users shall not use, for any serious task (including any form of publication or presentation), code versions built on platforms and/or with compiler options which have not been cleared as safe by the development team.
We distribute a package containing compiled libraries, user routines in source form, INCLUDE files,
various unformatted and formatted data files and a number of scripts for compiling ($FLUPRO/flutil/fff),
linking ($FLUPRO/flutil/lfluka) and running ($FLUPRO/flutil/rfluka) the program on a given plat-
form. A list of the contents is provided in a README file, and information on the current version, possibly
overriding parts of the present manual, may be contained in a file RELEASE-NOTES. No external library
routines are required. The timing and other necessary service routines are already included.
– FLUPRO: pointing to the directory where the distribution tar file will be decompressed. If you
work on bash do:
> export FLUPRO=/pathto/fluka
or if you work on csh or tcsh do:
> setenv FLUPRO /pathto/fluka
If the directory does not exist you should first create it by:
> mkdir /pathto/fluka
– FLUFOR (optional): containing the compiler type (gfortran or g77) which must be coherent with
the architecture of the package you downloaded. If you work on bash do:
> export FLUFOR=g77 or > export FLUFOR=gfortran
or if you work on csh or tcsh do:
> setenv FLUFOR g77 or > setenv FLUFOR gfortran
In case FLUFOR is not set, the script attempts to check whether the name of the $FLUPRO directory contains the gfor string: if so, gfortran is selected. If the FLUFOR variable is not set and the compiler name is not coded in the directory name, g77 is selected.
– GFORFLU (optional): set to specify the specific version of gfortran to be used if more than one
is available (e.g. if on your machine gfortran points to a version < 4.6, and gfortran46 points to version 4.6, you can set GFORFLU to gfortran46 and happily use the Fluka gfortran 64-bit version).
Note: Such definitions of the environment variables are lost when you log out. In order to make them permanently available you should add the export/setenv commands to your shell configuration file:
– If you work on bash, add them to your .bashrc file;
– If you work on csh (or tcsh), add them to your .cshrc (or .tcshrc) file.
2. Go to the FLUPRO directory and move there the selected Fluka package
> cd $FLUPRO
> cp /path/fluka2011.2-linuxAA.tar.gz ./
or
> cp /path/fluka2011.2-linux-gfor64bitAA.tar.gz ./
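The archive is then decompressed in $FLUPRO, e.g. (for the first of the two packages above):
> tar xzvf fluka2011.2-linuxAA.tar.gz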
We also distribute an RPM package which can be installed in principle on any x86 RPM-enabled distribution (Fedora, RedHat, Ubuntu, SuSE, etc.). The package requires the compat-libf2c-34 and compat-gcc-34 packages.
– In order to install the RPM on Fedora/RedHat just type (as root):
> rpm -ivh fluka2011.2-i686.rpm
– Other x86 RPM-enabled distributions might have different names for the compat-libf2c-34 and compat-gcc-34 packages. In this case you might have to install them separately and then run (as
root):
> rpm -ivh --nodeps fluka2011.2-i686.rpm
(*) XXX stands for hp, ibm, linux, osf etc. depending on the platform. Most UNIX Fortran compilers
require that the extension .for be replaced by .f (but the makefile provided with Fluka takes care of this,
see below).
If the source code is present, the INCLUDE files needed to compile the program may be grouped into
three files emfadd.add, flukaadd.add and lowneuadd.add.
A makefile and a number of auxiliary programs split these files into individual routines and INCLUDE
files, which are placed in 29+1 separate directories and compiled. The object files are inserted in a FLUKA
library libflukahp.a. A shell script lfluka links all routines into an executable flukahp (the name is the
same for all UNIX platforms, the ”hp” being due to historical reasons).
The Dpmjet and Rqmd object files are collected in two separate libraries.
The Fluka distribution tar file normally does not contain an executable file. To create the default Fluka
executable, type:
$FLUPRO/flutil/lfluka -m fluka
User-written routines (in particular a SOURCE subroutine, see list of user interface routines in
Chap. 13) can be compiled separately and linked overriding the default routines of the library. The
$FLUPRO/flutil/lfluka script can take care of them in three different ways:
1. appending the Fortran files (xxx.f) as last arguments to the lfluka procedure (Linux only), as in the example shown after this list;
2. appending the object files (precompiled using the $FLUPRO/flutil/fff procedure supplied with the
code) as last arguments to the lfluka procedure;
3. inserting the object files into a library and giving the library name to the script with the -O or -l
options.
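For instance, following the first method, a modified user routine kept in a (hypothetical) file mysource.f can be compiled and linked into a new executable with a command such as:
$FLUPRO/flutil/lfluka -m fluka -o flukamy mysource.f
where the -o option, as in the usxsuw example above, sets the name of the resulting executable.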
The program needs several auxiliary data files, containing cross sections, nuclear and atomic data and
so on. Nine of these files are unformatted and have an extension .bin (or .dat).
The auxiliary files are generally kept in the main Fluka directory and require no modification by the
user.
The $FLUPRO/flutil/rfluka script supplied with the code contains all relevant I/O file definitions, and can be used to run the code interactively or through a batch queue. It allows multiple runs to be submitted with a single command. Both rfluka and lfluka (the script used for linking, see above) contain usage instructions.
The rfluka script creates a temporary directory where it copies the necessary files and deletes it after
the results have been copied back to the parent directory, thus allowing to run more than one job at the
same time in the same directory. Appropriate names for the output files are generated by rfluka, including
a sequential number for each run.
If user routines are linked and a new executable is created, the name of the new executable can be
input using the -e option. Some on-line help is available issuing rfluka -h.
Chapter 4
BAMJM : new hadronisation package (strongly improved version of the original Bamjet) [177]
BLOCKM : BLOCK DATA routines
BMEM : the Boltzmann Master Equation nucleus-nucleus interaction package for nucleus-nucleus
interactions below 150 MeV/u
COMLATM : all geometry routines specific for Combinatorial Geometry (CG) and repetition (lattice
capability) in the geometry description
DECAYM : all routines connected with particle decays during transport, including those managing
polarisation or non phase-space-like decays for pions, kaons, muons and taus
DEDXM : all routines connected with dE/dx and radiative losses of hadrons and muons, including
delta-ray production and cross sections for muon photonuclear interactions
DPMM : interface routines for the Dpmjet generator
DUM : dummy routines
ELSM : all hadron and photon (photonuclear) cross section routines
EMFM : all routines dealing with electron, positron and photon transport and interactions (except
for photon photonuclear interactions, fluorescent X-ray and Auger production)
EVENTQM : auxiliary routines for the high energy hadronic interaction generators
EVENTVM : all routines (besides those in EVENTQM) connected with the high energy hadronic inelastic
interaction package
EVFFRM : separate module with all evaporation, fission and Fermi break-up routines
FLUOXM : all routines dealing with fluorescence X-ray and Auger production
GCRM : cosmic ray and solar flare simulation package
GEOLATM : geometry navigation and debugging routines
KASKADM : general event steering, most of the relevant transport routines for hadron and muon trans-
port, including magnetic field tracking, most of material and region dependent initialisation
and source routines
LOWNEUM : all routines concerning the multigroup treatment of “low” energy (E < 20 MeV) neutrons
MAINM : main, input parsing and auxiliary routines
MATHM : mathematical auxiliary routines (interpolation, integration, etc.). Many of them adapted
from SLATEC (http://www.netlib.org/slatec)
NEUTRIM : nuclear interactions of neutrinos
NOPTM : all routines connected with new scoring options implemented after Fluka86, and blank
COMMON setting for scoring options.
NUNDISM : neutrino-nucleon deep inelastic scattering
NUNRESM : neutrino-nucleon resonance production
OPPHM : optical photon production and transport
OUTPUTM : printing routines (apart from output of new options which is performed in NOPTM)
PEMFM : electromagnetic initialisation
PGM : Plotgeom geometry drawing package
PRECLM : full Peanut second part
PREEQM : full Peanut first part
PRIPROM : initialisation and drivers for Peanut
RNDM : random number generation, including gaussian-distributed random numbers
RQMDM : interface routines for the Rqmd generator
USERM : user oriented routines (see list below)
IBMM/HPM/LINUXM/OSFM/VAXM : timing and “environment” routines. These are machine specific.
The “FLUKA User Routines” mentioned at point 3) in the FLUKA User License are those
(and only those) contained in the directory usermvax, both in the source and binary versions
of the code.
User Guide
Chapter 5
The identifier values are reported in Table 5.1 together with the corresponding particle numbering
scheme of the Particle Data Group [142].
Table 5.2: Fluka generalised particles (to be used only for scoring)
(5) “Non Ionizing Energy Loss deposition” describes the energy loss due to atomic displacement (recoil nucleus) as a particle
traverses a material. The “Restricted NIEL deposition” gives the same energy loss but restricted to recoils having an
energy above the damage threshold defined for each material with the use of MAT–PROP with SDUM = DPA–ENER.
(6) “Dose equivalent” is computed using various sets of conversion coefficients (see AUXSCORE for details) converting
particle fluences into Ambient Dose equivalent or Effective Dose. Dose Equivalent of particles for which conversion
coefficients are not available, typically heavy ions, can be calculated by scoring generalised particle DOSEQLET.
(7) “Dose equivalent” is computed using the Q(LET ) relation as defined in ICRP60, where LET is LET∞ in water.
(8) “High energy hadron equivalent fluence” is proportional to the number of Single Event Upsets (SEUs) due to hadrons with energy > 20 MeV. Unstable hadrons (except neutrons) of lower energies are also counted. Neutrons of lower energies are weighted according to the ratio of their SEU cross section to that of > 20 MeV hadrons (substantially reflecting the (n,α) cross section behaviour in different microchip materials).
(9) “Thermal neutron equivalent fluence” is proportional to the number of SEUs due to thermal neutrons. Neutrons of higher energies are weighted according to the ratio of their capture cross section to that of thermal neutrons (following the 1/v law) [182, 183].
However, for the user's convenience, 25 common single-element materials are already pre-defined (see
Table 5.3): they are assigned a default density, name and code number even if no MATERIAL definition has
been given. The user can override any of these if desired (but cannot change the code number), and can add
more material definitions by means of one or more MATERIAL cards. The only constraints are:
1. the number sequence of the defined materials must be uninterrupted, i.e., there may not be any gap in
the numbering sequence from 25 onwards. If the input is name-based, omitting the material number
is the easiest way to ensure this, since the code will assign a correct number automatically.
2. if one of the pre-defined materials is re-defined using the same name, its code number (if expressed
explicitly) must be equal to that of the pre-defined material.
Note that the above constraints can be ignored if the input is name-based and the material number in the
MATERIAL option is left blank. In that case, the material name must be used in all relevant commands (e.g.
ASSIGNMAt, COMPOUND).
In addition to the 25 pre-defined single-element materials, some pre-defined compounds are available.
For them, the stopping power of charged particles is not calculated directly from the component elements by
the Bragg formula, but the Sternheimer parameters and the ionisation potential recommended by ICRU [109]
are applied. Composition is also that recommended by ICRU. Reference to these pre-defined compounds is
normally by name and no MATERIAL and COMPOUND cards are needed: if the input is explicitly number-
based only (via command GLOBAL), a number needs to be assigned to them using a MATERIAL card, of
course leaving no gaps in the numbering sequence.
If a user defines a compound with the same name and similar composition, the code automatically
“matches” its stopping power parameters to those of the pre-defined one. It is also possible to modify the
Sternheimer parameters with command STERNHEIme and the ionisation potential with MAT–PROP.
Since 2006, a very practical and appealing feature has been introduced: input by names. This
means that the numeric WHAT fields can be filled with pre-defined or user-defined names, such as:
– material names
– particle or generalised particle names
– region names, if the geometry too is written in name format (see p. 137)
– estimator names
– detector/binning names
Names must be at most 8 characters long, with the exception of detector names (estimator options USRBDX,
USRTRACK, USRCOLL, USRBIN, USRYIELD, RESNUCLEi) which can be 10 characters long. Leading and trailing
blanks are automatically stripped, and the input parser is case sensitive.
Special names (@LASTMAT, @LASTPAR, @LASTREG) can be used, corresponding respectively to the largest
material number, particle id and region number.
Name values and numeric values can both be used in the same input file, since the program is able to
distinguish a numeric field from a character field. For this reason, names that can be interpreted as numbers
must be avoided. This means that old numeric inputs need no modification. Fully-numeric interpretation,
however, can be forced by means of the GLOBAL card (p. 141).
Due to the introduction of input by names, input data cards are no longer interpreted in the same
order as in the input file; therefore the echo on standard output will look different from the original input.
When using numeric fields, note that even if the values to be assigned to WHAT-parameters were
logically integers, because of the format used they must be given with a decimal point.
The order of the input cards is almost free, with the following exceptions:
– The START command initiates execution. While old versions of Fluka allowed multiple restarts,
only the first START command is now executed. Thus any input given after START is ignored,
with the exception of USROCALL and STOP.
– The STOP command stops the execution of the program. Thus any input present after STOP is ignored.
– Some option cards must or can be immediately followed by a variable amount of information, not
always in the standard format indicated above. These are:
OPEN is generally followed by the name of the file to be opened (scratch files are an exception). See
p. 181.
PLOTGEOM:
Unless a different logical input unit is specified, the call to the Plotgeom program must be followed
immediately by the Plotgeom input, in special format (see p. 209).
TITLE:
the card following the TITLE command is considered as the title of the run and is reproduced in the
output (see p. 240).
– for old, fully numeric input format only:
– In some cases, the MAT–PROP option must be requested after the corresponding MATERIAL card.
– The PLOTGEOM command must be issued after the geometry input, and, in case the user chooses
to plot only boundaries between different materials, it must come also after all the ASSIGNMAt
cards.
It is also recommended that PLOTGEOM be issued before any biasing and any other option which
makes use of permanent and/or temporary storage.
Most definitions have some default values. If these are acceptable, it is not compulsory that the corresponding
option card appear explicitly in the input sequence. Furthermore, for most WHAT and/or SDUM parameters
a default value (which may differ from the default applying when the option card is absent altogether) is
used if the corresponding field is left blank (or set = 0.0) in the input card.
Several option cards may appear more than once in the input sequence. In most cases, each such
additional card adds more definitions to those already given, provided they are different and not
contradictory. In case of conflict, the last given generally overrides the previous one(s). This feature may be
successfully exploited in the numerous cases where whole arrays are assigned according to the scheme
“from . . . to . . . , in steps of . . . ”, making the input more compact.
An example can be found in the description of option ASSIGNMAt (p. 66,
Note 2), which is used to set a one-to-many correspondence between material numbers and region numbers.
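For instance, the following sketch (region indices and materials are purely illustrative, and WHAT(4) is assumed to be
the step of the loop, as in the ASSIGNMAt entry on p. 66) assigns the pre-defined compound WATER to regions 4 to 12
and IRON to every second region from 13 to 21:
ASSIGNMAT     WATER       4.0      12.0
ASSIGNMAT      IRON      13.0      21.0       2.0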
In most cases of such “DO-loop” assignments, especially when the same option card can be used to
assign a value to more than one quantity, a blank or zero field does not assign the default value but leaves
the previously given value unchanged. To remove any possible ambiguity, the default value must then be
reset explicitly (generally -1. has to be input in such cases).
“DO-loop” assignments can be used also when the input is name-based, since the program replaces each
name by the corresponding numerical index. The correspondence can be found by examining the output
from a short test run: however, it must be remembered that adding a new material, or a new region, will
change the numerical sequence unless the new item is issued as the last of material or region definitions.
If the “DO-loop” spans all materials, or all regions, using the generic names @LASTMAT and
@LASTREG makes any modification of the assignment definition unnecessary.
All defaults and exceptions are listed under the description of each Fluka input option. Different
defaults, tuned to the type of application of interest, can be specified using the option DEFAULTS (p. 92).
6.1.1 Syntax
All preprocessor directives are single lines starting with the # character in the first column and can appear
anywhere in the input file, either between normal input cards or inside the geometry definition (inline or
externally defined). Each identifier can be up to 40 characters in length.
With the definition directives one can define identifiers to be used later for the inclusion or removal of parts
of the input file:
#define [identifier_name]
defines [identifier_name] without giving it a value. This can be used in conjunction with another set of
directives that allow conditional execution.
#undef [identifier_name]
removes the definition of [identifier_name].
With the conditional directives one can include or remove parts of the input file before execution. The
#if, #elif, #else blocks must be terminated with a closing #endif. There is a maximum of 10 nesting
levels that can be used.
#if [identifier_name]
...
#elif [identifier_name]
...
#else
...
#endif
The #if and #elif (else-if) directives are followed by an identifier. If the identifier is defined, the
statement evaluates to true, otherwise to false.
Example:
#define DEBUG
#define PLOT1
...
#if DEBUG
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
GEOEND 100.0 100.0 100.0 -100.0 -100.0 -100.0 DEBUG
GEOEND 50.0 50.0 50.0 &
#else
GEOEND
#endif
...
#if PLOT1
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
PLOTGEOM 1.0 -2000.0
MBWD6L1
-100.0 0.0 -21620.0 100.0 0.0 -21250.0
1.0 0.0 0.0 0.0 0.0 1.0
-100.0 0.0 -21200.0 100.0 0.0 -20800.0
STOP
#endif
The include directive switches the input stream from the original input file to a different file, and back to
the original file after an end-of-file is met. It can be applied also to geometry input. Include directives can
be nested at multiple levels.
#include [path/filename]
where “path” can be an absolute path, or a relative path (relative to the “launching” directory).
Example:
#include /home/geometries/target2.geom
#include frontplanes.geom
Chapter 7
A complete description of each command will follow in Sections 7.2 to 7.86, in alphabetical order.
Here is a list of the options (commands) that are at the disposal of the Fluka user to prepare an input file.
In the rest of this section, the same commands will be presented by grouping them according to the different
services they can provide.
ASSIGNMAt defines the correspondence between region and material indices and defines regions where
a magnetic field exists
AUXSCORE allows filtering of scoring detectors of a given estimator type with auxiliary (generalized)
particle distributions and dose equivalent conversion factors, and with isotope ranges
BEAM defines most of the beam characteristics (energy, profile, divergence, particle type)
BEAMAXES defines the axes used for a beam reference frame different from the geometry frame
BEAMPOS defines the starting point of beam particles and the beam direction
BIASING sets importance sampling (Russian Roulette/splitting) at boundary crossings and at high-
energy hadronic collisions on a region by region basis
COMPOUND defines a compound, a mixture, or a mixture of isotopes
CORRFACT allows the material density to be altered for dE/dx and nuclear processes on a region-by-region
basis
DCYSCORE associates selected scoring detectors of given estimator type with user-defined decay times
DCYTIMES defines decay times for radioactive product scoring
DEFAULTS sets Fluka defaults for specified kinds of problems
DELTARAY activates delta-ray production by heavy charged particles and controls energy loss and
deposition
DETECT scores energy deposition in coincidence or anti-coincidence with a trigger, on an event by
event basis
DISCARD defines the particles which must not be transported
ELCFIELD sets the tracking conditions for transport in electric fields and possibly defines a
homogeneous electric field (not yet implemented)
EMF requests detailed transport of electrons, positrons and photons
EMF–BIAS defines electron/photon leading particle biasing or biases electron/photon interaction
length
EMFCUT sets energy cutoffs for electrons, positrons and photons, for transport and production, or
for switching off some physical interactions
EMFFIX sets the size of electron steps corresponding to a fixed fraction loss of the total energy
EMFFLUO activates production of fluorescence X rays in selected materials
EMFRAY activates Rayleigh (coherent) scattering in selected regions
EVENTBIN scores energy or star densities in a binning structure independent from the geometry, and
prints the binning output after each “event” (primary history)
EVENTDAT prints event by event the scored star production and/or energy deposition in each region,
and the total energy balance
EXPTRANS requests exponential transformation (“path stretching”) (not yet implemented)
FLUKAFIX sets the size of the step of muons and charged hadrons to a fixed fraction loss of the
kinetic energy
FREE switches to free-format input (geometry excluded)
GCR–SPE initialises Galactic Cosmic Ray calculations
GEOBEGIN starts the geometry description
GEOEND ends the geometry description; can also be used to activate the geometry debugger
GLOBAL issues global declarations about the class of the problem (analogue or weighted) and
about the complexity of the geometry. It also allows free-format input to be used (geometry
included)
HI–PROPE defines the properties of a heavy ion primary
IONFLUCT calculates ionisation energy losses with fluctuations
IRRPROFI defines an irradiation profile for radioactive decay calculations
LAM–BIAS biases decay length and interaction length
LOW–BIAS requests non-analogue absorption and defines the energy cutoff for low-energy neutron
transport on a region by region basis
LOW–DOWN biases the downscattering probability in low-energy neutron transport on a region by
region basis
LOW–MAT sets the correspondence between Fluka materials and low energy neutron cross section
data
LOW–NEUT requests low-energy neutron transport
MATERIAL defines a material and its properties
MAT–PROP supplies extra information about gaseous materials and materials with fictitious or
inhomogeneous density and defines other material properties
MCSTHRES defines energy thresholds for applying the multiple Coulomb scattering algorithm to the
transport of muons and charged hadrons
MGNFIELD sets the tracking conditions for transport in magnetic fields and possibly defines a
homogeneous magnetic field
MULSOPT controls optimisation of multiple Coulomb scattering treatment. It can also request
transport with single scattering
MUPHOTON controls photonuclear interactions of high-energy heavy charged particles (mediated by
virtual photons)
OPEN defines input/output files without pre-connecting
OPT–PROP defines optical properties of materials
OPT–PROD controls Cherenkov and Transition Radiation photon production
PAIRBREM controls simulation of pair production and bremsstrahlung by high-energy heavy charged
particles
PART–THR sets different energy cutoffs for selected particles
PHOTONUC activates photon and electron interactions with nuclei, and photomuon production
PHYSICS controls some physical processes for selected particles
PLOTGEOM calls the Plotgeom package to draw a slice of the geometry
POLARIZA defines polarised beams (only for photons at present)
RADDECAY requests simulation of radioactive decays and sets the corresponding biasing and transport
conditions
RANDOMIZe sets the seeds and selects a sequence for the random number generator
RESNUCLEi scores residual nuclei after inelastic hadronic interactions
ROT–DEFIni defines rotations/translations to be applied to user-defined binnings
ROTPRBIN sets the storage precision (single or double) and assigns possible rotations/translations
for a given user-defined binning (USRBIN or EVENTBIN)
SCORE defines the energy deposited or the stars to be scored by region
SOURCE tells Fluka to call a user-written source routine
SPECSOUR calls special pre-defined source routines (synchrotron radiation photons, particles created
by colliding beams or by cosmic ray sources)
START defines the number of primary particles to follow, gets a primary particle from a beam
or from a source, starts the transport and repeats until the predetermined number of
primaries is reached
STEPSIZE sets the maximum step size in cm (by region) for transport of charged particles
STERNHEIme allows users to input their own values of the density effect parameters
STOP stops input reading
TCQUENCH sets scoring time cutoffs and/or Birks quenching parameters
THRESHOLd defines the energy threshold for star density scoring, and sets thresholds for elastic and
inelastic hadron reactions
TIME–CUT sets transport time cutoffs
TITLE gives the title of the run
USERDUMP requests a collision file and defines the events to be written
USERWEIGht defines extra weighting to be applied to scored yields, fluences, doses, residual nuclei or
star densities (at scoring time)
USRBDX defines a detector for a boundary crossing fluence or current estimator
USRBIN scores energy, star density or particle fluence in a binning structure independent from
the geometry
USRCOLL defines a detector for a collision fluence estimator
USRGCALL calls user-dependent global initialisation
USRICALL calls user-dependent initialisation
USROCALL calls user-dependent output
USRTRACK defines a detector for a track-length fluence estimator
USRYIELD defines a detector for scoring particle yield around a given direction
WW–FACTOr defines weight windows in selected regions
WW–PROFIle defines energy group-dependent extra factors (“profiles”) to modify the basic setting of
the low-energy neutron weight windows in selected sets of regions, or the low-energy
neutron importances in each region
WW–THRESh defines the energy limits for a RR/splitting weight window
Most Fluka commands are optional, and if any of them is not used an appropriate set of defaults is
provided. A few commands, however, are nearly always needed in order to provide a meaningful definition
of the problem to be studied.
In general, for a problem to be fully determined, the following elements need to be defined:
1. the source of primary particles
2. the geometry
3. the materials
4. the desired results (the radiometric quantities to be estimated and the corresponding detectors)
5. setting of parameters, accuracy, conditions, and in general technical directives to the program on how
the calculation shall be performed
Defaults are provided in Fluka for all the above features, but those for items 1, 2 and 3 are unlikely to
be useful: therefore the few commands used to define source, geometry and materials are practically always
present in the input file.
As far as item 4 is concerned, the user has a choice of several options to request the estimation of various
radiometric quantities. Of course, there is not much point in running the program without requesting any
result, but in a phase of input preparation it is quite common to have a few runs without any scoring
commands. A typical minimum input containing only specifications for the above items 1, 2 and 3 will
still produce some useful information. Looking at the standard Fluka output, the user can do several
consistency checks, and can get some better insight into the problem from the final statistics and energy
balance (see 9.1).
The last part of the problem definition, item 5 (settings), is important but is supported by very robust
defaults. In many cases, the user’s only concern should be to choose the right set of defaults. However,
there are some applications which require explicit setting commands, for instance to request photonuclear
reactions for electron accelerator shielding.
The simplest particle source is pointlike, monoenergetic and monodirectional, that is, a “particle beam”.
Option BEAM, fully described on p. 71, is used to define the particle type and momentum (or energy).
If desired, this option can also define an energy spread, a beam profile shape and an angular divergence.
However, the two latter distributions are restricted to a beam directed in the positive z direction: to describe
divergence and beam profile for an arbitrary beam direction it is necessary to define a beam reference frame
by means of option BEAMAXES (p. 74).
The energy declared with BEAM is used by the program to initialise cross section tables and other
energy-dependent arrays: therefore that command must always be present, even when a more complex source
is described by means of a user routine.
The particle starting point and direction are declared by means of option BEAMPOS (see p. 76).
If BEAMPOS is not present, the beam particles are assumed to start from the origin of the coordinates
0., 0., 0. and to be directed along the z axis. It is important that the starting point not be on a
boundary and not be inside a blackhole region. In many cases, starting in vacuum upstream of the actual
target can be convenient.
BEAMPOS can be used also to define sources extended in space (spherical, cylindrical, etc.). Both BEAM
and BEAMPOS commands can be placed anywhere in the input file, before the START command.
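A minimal sketch of the two cards (all values are arbitrary, and the remaining fields are left at zero so that the defaults
apply; see p. 71 and p. 76 for the exact meaning of each field):
* 3.5 GeV/c proton beam (a negative WHAT(1) would instead mean kinetic energy in GeV)
BEAM            3.5       0.0       0.0       0.0       0.0       0.0 PROTON
* Start at x = 0, y = 0, z = -50 cm; default direction cosines, i.e. along +z
BEAMPOS         0.0       0.0     -50.0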
Some special particle sources can be defined with command SPECSOUR: synchrotron radiation photons,
cosmic rays, particles produced by colliding beams.
Particle sources with more complicated features (with arbitrary distribution in energy, space, angle,
time, and even with more than one type of particle) can be described by a user-written subroutine SOURCE
(see 13.2.19). To call it, a command SOURCE (see p. 228) must be present in input.
The Combinatorial Geometry used by Fluka is based on two important concepts: bodies and regions. The
former are closed solid bodies (spheres, parallelepipeds, etc.) or semi-infinite portions of space (half-spaces,
infinite cylinders) delimited by surfaces of first or second degree. The user must combine bodies by Boolean
operations (addition, intersection and subtraction) to perform a complete partition of the space of interest
into regions, namely cells of uniform material composition. One important rule to remember is that inside
the space of interest, defined by means of an external closed body, every point must belong to one and
only one region.
Input for the geometry description, which has its own format and rules, explained in Chap. 8, must
be contained between a GEOBEGIN and a GEOEND card. These two cards follow the normal Fluka input
syntax. An option offered by the GEOBEGIN command (p. 137) is to read the geometry input from a separate
file. Command GEOEND (p. 139) can be used also to invoke the geometry debugger, a check which is always
strongly recommended.
Geometry input, sandwiched between a GEOBEGIN and a GEOEND card, can be placed anywhere in
the input file (before the START command). It is mandatory in all cases.
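Purely as a sketch of the overall structure (name-based geometry, SDUM = COMBNAME on GEOBEGIN; body and region
names, dimensions and the integer hints on the region lines are illustrative, see Chap. 8 for the exact rules): a body
section terminated by END, a region section terminated by a second END, and GEOEND. Materials are assigned to the
regions afterwards with ASSIGNMAt cards:
GEOBEGIN                                                              COMBNAME
    0    0          A cylindrical target inside a spherical void
SPH blkbody    0.0 0.0 0.0 10000.0
SPH void       0.0 0.0 0.0  1000.0
RCC target     0.0 0.0 0.0  0.0 0.0 10.0  5.0
END
BLKHOLE      5 +blkbody -void
VOID         5 +void    -target
TARGET       5 +target
END
GEOEND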
An optional command related to geometry is PLOTGEOM. It is used to display sections of the geometry
and needs to read its own input, as explained at p. 209.
The Fluka geometry, described in Chap. 8, and all particle space coordinates (position x, y, z, and
corresponding direction cosines) are based on a right-handed Cartesian reference frame. The origin of the
frame (0, 0, 0) and the direction of the three perpendicular axes can be chosen arbitrarily by the user, but
by default a particle beam and its space characteristics (angular divergence, transversal profile, polarisation)
are referred to a beam direction coincident with the z-axis of the geometry frame of reference. A different
beam direction can be specified by means of command BEAMAXES: see a description of the command (p. 74)
and especially the corresponding Notes.
Also some scoring structures (binnings, see command USRBIN) are defined as meshes parallel to the
reference axes, and in the case of cylindrical binnings the basic axis is again the z-axis. In a similar way,
some geometrical bodies (planes, parallelepipeds, infinite cylinders) are described by their orientation with
respect to the axes. But rototranslation transformations, defined by commands ROT-DEFIni (p. 221) and
ROTPRBIN (p. 224), allow the user to input binnings and geometrical bodies with arbitrary orientation
with respect to axes.
7.1.2.4 Materials
Materials in Fluka are identified by a name (an 8-character string) and by a number, or material index.
Both are used to create correspondences, for instance between region number and material number, or
between material name and neutron cross section name.
Some materials are already pre-defined. Table 5.3 on p. 47 lists the 25 available pre-defined materials
with their default name, index number, density and atomic number and weight. The user can either refer to
any one of them as it is, or override it with a new number, name and other properties, or define a new material.
In the latter two cases, the new material definition is done by option MATERIAL (p. 163). If the material is
not a single element or isotope, but a compound, mixture or alloy, a command COMPOUND (p. 84), extended
on as many cards as necessary, is needed to specify its atomic composition. The correspondence between the
material and the composition is set using the same name in the MATERIAL and in the COMPOUND cards.
Note that material names, if low-energy neutron transport is desired, cannot be assigned arbitrarily but
must match one of the names available in the Fluka cross section library (see Table 10.3 on p. 325).
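A sketch of such a pair of cards (composition and density values are only indicative, and negative contents on the
COMPOUND card are assumed to denote mass fractions, see p. 84): an iron-nickel alloy built from two pre-defined
elements, with the MATERIAL and COMPOUND cards tied together by the common name:
* 64% iron / 36% nickel by mass, bulk density 8.1 g/cm3
MATERIAL        0.0       0.0       8.1       0.0       0.0       0.0 FENIALLO
COMPOUND      -64.0      IRON     -36.0    NICKEL       0.0       0.0 FENIALLO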
Once all the materials to be assigned to the various geometry regions have been defined (either
explicitly with MATERIAL or implicitly in the pre-defined list), it is necessary to specify of which material
each region is made, by setting a correspondence material index → region number. This is done by command
ASSIGNMAt (p. 66).
Command ASSIGNMAt is used also to indicate that a magnetic field exists inside one or more given
regions: in this case a command MGNFIELD (p. 172) is needed to specify intensity and direction of a constant
magnetic field, or a complex one defined by a user routine as explained below in 7.1.7. Note that in practice
at least one ASSIGNMAt command must always be present.
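A sketch of such a combination (the region name GAP is hypothetical, and the MGNFIELD layout assumed here, with
tracking precision parameters in WHAT(1)-(3) and field components Bx, By, Bz in tesla in WHAT(4)-(6), should be
checked against the full description on p. 172):
* Fill region GAP with the pre-defined compound AIR and flag it for a magnetic field
ASSIGNMAT       AIR       GAP       GAP       0.0       1.0
* Uniform 1.5 T field along y; WHAT(1)-(3) left at zero to keep the default precision
MGNFIELD        0.0       0.0       0.0       0.0       1.5       0.0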
A less common kind of correspondence is set by option LOW–MAT (p. 158). By default, the
correspondence between a material and a low-energy neutron cross section set is established by name, but in some
circumstances this cannot be done, for instance when two different materials share the same cross section
set, or when two cross section sets have the same name. Option LOW–MAT can be used to set a different
correspondence.
Another Fluka option concerning the definition of materials is MAT–PROP (p. 165). It is used for
a variety of purposes: to describe porous, inhomogeneous or gas materials, to override the default average
ionisation potential, to set a threshold energy for DPA calculations and to request a call to a special user
routine when particles are transported in a given material.
Many Fluka input options are not used to describe the radiation transport problem but to issue directives
to the program about how to do the calculations. Other options are used just to select a preferred input
format. We refer to these options as “setting options”.
Thanks to a complete and well-tuned set of defaults, setting options are not always necessary, especially
for a beginner or in a preliminary phase of input preparation. However, an experienced user can often improve
considerably the code performance by a judicious selection of parameters.
The default, fixed input format can be replaced by a free format using option FREE (p. 134) or better
GLOBAL (p. 141). The latter allows free format to be chosen separately for the normal input and for the
geometry input, and serves also a few other purposes: it can be used to increase the maximum allowed number
of geometry regions, and to force a calculation to be fully analogue (i.e., simulating physical reality as directly
as possible, without any biasing to accelerate statistical convergence). A more esoteric capability of GLOBAL,
used mainly for debugging, is to ensure that the random number sequence be exactly reproduced even in
cases where the geometry tracking algorithm has the possibility to follow different logical paths to achieve
the same result.
The difficult task of choosing the best settings for a calculation problem is made much easier by the existence
of several “pre-packaged” sets of defaults, each of which is optimised for a particular type of application. Each
set is chosen by option DEFAULTS, which has to be placed at the beginning of the input file, possibly preceded
only by TITLE or GLOBAL. Several possibilities include hadrotherapy, calorimetry, pure electromagnetic runs
without photonuclear reactions, low-energy neutron runs without gamma production, and others (see p. 92).
One set of defaults is tuned for maximum precision (but not necessarily great time efficiency). Reasonable
defaults, acceptable for most generic routine calculations, are provided in case DEFAULTS is missing. In
most cases, the user has the possibility to use some of the other setting options described below, to override
one or more of the defaults provided by the chosen set.
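For example, assuming the precision-oriented set is the one selected by SDUM = PRECISIOn (see p. 92), the beginning
of an input file could simply read:
TITLE
Proton irradiation of a thin copper target
DEFAULTS                                                               PRECISIO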
In any case, it is important to check the list of defaults to make sure that nothing important is missing
or has been overlooked. For instance, photonuclear reactions, which are critical for electron accelerator
shielding, are not provided by any of the available default sets and must be added by the user by means of
the PHOTONUC command.
Another setting option, DISCARD, is used to indicate particles which shall not be transported. The
energy of those particles is not deposited anywhere but is added up in an accumulator which is printed at
the end of the Fluka standard output. Of course it is the user’s responsibility to see that the discarded
particles and their progeny would not give a significant contribution to the requested results.
The concept of multiple scattering is an approximation to physical reality (condensed history approxima-
tion [26]), where charged particles undergo a very large number of single collisions with the atomic electrons,
too many to be simulated in detail except in very special cases. All the theoretical treatments which have been
developed are valid only within certain limits, and none of them gives rules on how to handle material bound-
aries and magnetic fields. Fluka uses an original approach [81], based on Molière’s theory [30, 136–138],
which gives very good results for all charged particles in all circumstances (even in backscattering prob-
lems [75]), preserving various angular and space correlations and freeing the user from the need to control
the particle step length.
Although the default treatment is always more than satisfactory, the user has the possibility to request
various kinds of optimisation, for both electrons/positrons and heavy charged particles. This can be done
by means of option MULSOPT (p. 174), which offers also the possibility to switch off completely multiple
scattering in selected materials. The latter is a technique used when simulating particle interactions in
gases of very low density such as are contained in accelerator vacuum chambers (gas bremsstrahlung): the
simulation is done for a gas of much larger density and the results are scaled to the actual low density: but
scaling is meaningful only if no scattering takes place.
Another very important feature of option MULSOPT is single scattering, which can be requested in
various degrees at boundary crossing, when the limits of Molière’s theory are not satisfied, and even all
the time (but the latter possibility is to be used only for problems of very low energy, because it is very
demanding in CPU time).
There is also another option connected with multiple scattering, which however concerns only heavy
charged particles such as hadrons and muons: MCSTHRESh (p. 170) allows a threshold to be set below which
multiple Coulomb scattering is not performed. However, the CPU time saved is minimal and the option is
not frequently used.
Another aspect of the condensed history approximation is that charged particle transport is performed in
steps. The finite fraction of the particle energy which is lost and deposited in matter in each step is an
approximation for the sum of innumerable tiny amounts of energy lost by the particle in elastic and inelastic
collisions.
In early Monte Carlo programs results could depend critically on the size of the step, mainly due to
the inaccurate determination of the path length correction (ratio between the length of the actual wiggling
path of the particle and that of the straight step connecting the two endpoints). For a more complete
discussion, see [6,75]. The multiple scattering algorithm used by Fluka [81] provides a robust independence
of the results from the step size, but for problems where a special accuracy is requested, or when magnetic
fields are present, it is possible for the user to override the default step length. Two options control the
maximum fractional energy loss per step: EMFFIX for electrons and positrons (p. 118), and FLUKAFIX for
muons and charged hadrons (p. 133). The second one is seldom used, however, except in problems of very
large dimensions typical of cosmic ray research. Option STEPSIZE (p. 232) is used instead to limit the
absolute length of the step, independent of the energy lost. Contrary to EMFFIX and FLUKAFIX, it works
also in vacuum. While its use is highly recommended in problems with magnetic fields, to ensure that steps
be smaller than the dimensions of the current region and of those that border it, when no magnetic fields
are present this option is better avoided, as it implies no obvious advantage and could even
degrade performance.
Setting energy cutoffs, for both transport and production, is an important responsibility of the user, who
is interested in choosing the best compromise between accuracy and time efficiency. Each of the parameter
sets available via option DEFAULTS, including the basic defaults set which exists when that option has not
been explicitly requested, offers a well-optimised choice for the corresponding class of applications, with only
one exception. But even so, it is often convenient to override some of the default cutoffs in order to improve
performance. The exception concerns the default particle production cutoffs for electrons, positrons and
photons, which are dependent on other settings (see EMFCUT below).
Transport cutoffs, or thresholds, are set with command PART–THRes (p. 196) for hadrons and muons,
with EMFCUT (p. 113) for electrons, positrons and photons, and with LOW–BIAS (p. 154) for low-energy
neutrons. Despite the similar functionality of the three commands, there are important differences in their
syntax and in the way the threshold is implemented for the three families of particles. PART–THRes can
assign different transport thresholds to different particles, but the thresholds are the same in all materials
and regions. When the hadron or muon energy becomes lower than the relevant threshold, the particle is not
stopped but ranged out in a simplified way. Because the electron and photon cutoffs are more critical with
respect to calculation accuracy, EMFCUT can assign transport thresholds on a region basis: on the other
hand no ranging out is performed, due to the difficulty of clearly defining electron ranges. For low-energy
neutrons, the transport threshold is set by LOW–BIAS also on a region basis, but as a group number rather
than an energy.
Two input commands can set particle production cutoffs, respectively for heavy particles and for
electrons, positrons and photons.
Thresholds for delta-ray production by charged hadrons and muons are assigned, on a material basis, by
means of option DELTARAY (p. 98). Energy transfers to electrons lower than the threshold are handled in
the continuous slowing down approximation.
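A hedged sketch (threshold value and material arbitrary; the field layout assumed here is threshold in GeV in WHAT(1),
tabulation controls in WHAT(2)-(3) and material range in WHAT(4)-(6), to be checked against p. 98):
* Produce delta rays above 100 keV in COPPER; softer collisions are treated in
* the continuous slowing down approximation
DELTARAY     1.0E-4       0.0       0.0    COPPER    COPPER       1.0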
Production of bremsstrahlung by electrons and of Møller/Bhabha secondary electrons is simulated explicitly
above thresholds set on a material basis with option EMFCUT (p. 113). Defaults for electron and photon
production cutoffs are dependent on other settings in a complex way. Therefore it is recommended to check
the values printed on standard output, or to set EMFCUT production cutoffs explicitly for each material.
Note also that the same EMFCUT command is used to set both transport and production cutoffs: but the
setting is done by region in the first case and by material in the second.
To complete the list of commands used for cutoff setting, we will mention THRESHOLd (p. 238), which
is used to set an energy threshold for star scoring. In principle, a “star” is any high energy inelastic hadron
interaction (spallation) and star density has always been one of the quantities which can be scored by Fluka.
Since a popular technique to estimate induced radioactivity was based originally on the density of stars
produced by hadrons with energies higher than 50 MeV, the possibility to set a scoring energy limit is
provided.
For time-dependent calculations, two time cutoff options are available: one for particle transport, TIME–CUT,
and one for scoring, TCQUENCH. While option TIME–CUT (p. 239) sets a particle-dependent time limit after
which the corresponding particle history is terminated, the limits set by TCQUENCH (p. 236) are assigned to
selected binnings. Scoring contributions to a binning by particles having exceeded the corresponding time
limit are ignored, but particle transport continues, possibly contributing to other detector scores.
Transport of charged particles can be done in many ways: without delta ray production and ionisation
fluctuations (continuous slowing down approximation), with ionisation fluctuations and no delta rays, with
delta ray production above a chosen energy threshold and no ionisation fluctuations below the threshold, and
with both: delta rays above the threshold and ionisation fluctuations below it. Depending on the application
type chosen with option DEFAULTS, different defaults and thresholds apply, which can be modified by the
user by means of options IONFLUCT, DELTARAY and EMFCUT. Option IONFLUCT (p. 145) is used to request
(restricted) ionisation fluctuations on a material basis. In Fluka, these fluctuations are not simulated
according to Landau or Vavilov theory but according to an original statistical approach [73]. They can be
requested separately for electrons and positrons and for muons and charged hadrons. Delta ray production
thresholds are instead set for the two particle families by two separate options, which have already been
mentioned above in the context of production cutoffs (7.1.3.5): EMFCUT (p. 113) and DELTARAY (p. 98).
DELTARAY can be used also to define (and print) the mesh width of the stopping power tabulations used by
the program.
The user has also the possibility to change the default parameters used in the calculation of stopping
power. Command STERNHEIme (7.68) allows the density effect parameters to be changed, and MAT–PROP (p. 165)
can set, in addition to several other material properties, a user-defined average ionisation potential.
In Fluka, an effort has been made to implement a full cross-talk between different radiation components
(hadronic, muonic, electromagnetic, low-energy neutrons, heavy ions, optical photons). However, some
components are not activated by default, and others are only activated in some of the available default
settings. Input options are provided to switch them on and off.
In a similar way, some physical effects may need to be activated, overriding the chosen defaults. On
the other hand, in some cases it can be of interest (but possibly dangerous!) to ignore some effects.
High-energy hadrons and muons are always generated and transported, except with default settings
EM–CASCA (p. 93) and NEUTRONS (p. 94) (however, they cannot be requested by overriding these two
defaults). To suppress them, one can use command DISCARD (p. 104).
Option EMF (ElectroMagnetic Fluka, p. 107) can be used to request electron, positron and photon
transport, and also to ask for its suppression (the latter could be obtained also by discarding electrons,
positrons and photons by means of DISCARD).
Low-energy neutron transport (if not already on by default) can be activated with option LOW–NEUT.
Explicit suppression is not possible: but the same effect can be obtained using option LOW–BIAS to set a
cutoff at energy group 1.
Heavy ion transport (only ionisation energy loss, without nuclear interactions) is implicit with some
default settings, while with others it is not available. Details can be found in the description of command
IONTRANS (p. 148). The same command can be used also to request heavy ion interactions using different
event generators: in this case the corresponding libraries must be linked.
A special option, HI–PROPErt (p. 144), is necessary to define the properties of a heavy ion primary,
since the particle type input via the BEAM command can only be a generic heavy ion.
Generation and transport of optical photons is available only on explicit user request. Activation
(and deactivation) are requested via OPT–PROD (for Cherenkov, transition radiation or scintillation photon
production) and OPT–PROP (transport). See respectively p. 183 and p. 187.
Some physical effects are automatically activated, but only when certain default sets are in force (see option
DEFAULTS on p. 92 and Table 7.1 on p. 97), and can be switched on or off with appropriate commands. The
command to simulate fluorescence is EMFFLUO (p. 120), that for Rayleigh scattering and Compton binding
corrections and Doppler broadening is EMFRAY (p. 122), while for multiple scattering there are MULSOPT
and MCSTHRESh which we have already introduced in 7.1.3.3. High-energy effects such as production of
bremsstrahlung and electron pairs by heavy charged particles (in particular muons) are regulated by option
PAIRBREM (p. 194).
A few physical effects need to be requested explicitly, whatever the defaults. These are photon
polarisation (command POLARIZAti, p. 212), polarisation of pion, kaon and muon decays (command PHYSICS,
p. 202), photonuclear reactions (PHOTONUC, p. 198) and muon hadronic interactions via virtual photons
(MUPHOTON, p. 178).
In some cases, it is also possible to switch off some important effects to study the relative importance
of different processes. Command THRESHOLd allows a lower energy limit to be set for hadron elastic scattering
and inelastic reactions, and EMFCUT does the same with various kinds of electron and photon interactions.
The user must bear in mind, however, that results obtained suppressing effects which are critical for the
development of the electromagnetic or hadronic cascade are unlikely to be physically correct.
Any result in a Monte Carlo calculation is obtained by adding up the contributions to the “score”, or “tally”
of a detector defined by the user. A detector is the Monte Carlo equivalent of a measurement instrument.
Each “estimator” (detector type) is designed to estimate one or more radiometric quantities, and the final
score is a statistical estimation of the average value of the corresponding population. As in experimental
measurements, it is possible to calculate a standard deviation by running several independent calculations.
No default detector is available: each scoring option must be explicitly requested. There are different
input options corresponding to different types of detector. The simplest is SCORE which provides energy
deposition (proportional to dose) or star density in every region of the geometry. “Stars” is an old name for
inelastic hadron reactions which derives from early experiments with nuclear emulsions.
The same quantities can be scored in a uniform spatial mesh independent of geometry, called a “bin-
ning”, by means of option USRBIN (p. 249). There are several types of binnings: Cartesian, 2D-cylindrical,
3D-cylindrical and even more complex phase space structures. In addition to dose and star density, it is
possible to use USRBIN to score particle fluence distributions in space. USRBIN results are often displayed
as colour plots where each colour corresponds to a pre-defined range of values. A post-processing program
for this purpose (Pawlevbin) is available in the directory $FLUPRO/flutil, and a GUI interface can be
downloaded from the Fluka website www.fluka.org.
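A sketch of a Cartesian binning (all numbers illustrative; WHAT(1) = 10.0 is assumed to select a Cartesian mesh, the
generalised particle ENERGY is scored on unformatted unit 21, the first card carries the upper mesh limits and the
continuation card, marked by SDUM = &, the lower limits and the numbers of bins; see p. 249):
USRBIN         10.0    ENERGY     -21.0      10.0      10.0      20.0 DoseMap
USRBIN        -10.0     -10.0       0.0     100.0     100.0     200.0 &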
Fluence, averaged over the volume of a given geometry region, can be calculated with options
USRTRACK (p. 262) and — less often — USRCOLL (p. 257). The first is a “track-length estimator” (it
estimates fluence as volume density of particle trajectory lengths), and the second is a “collision estimator”
(fluence is estimated as volume density of collisions weighted with the particle mean free path). Of course,
USRCOLL can be used only in a region of matter, while USRTRACK works also in vacuum. Both options
provide fluence differential energy spectra.
Another common scoring option is USRBDX (p. 246), which also calculates fluence, but averaged over
the boundary between two geometry regions. It is a “boundary crossing estimator”, which estimates fluence
as the surface density of crossing particles weighted with the secant of the angle between trajectory and
normal to the boundary at the crossing point. Option USRBDX can also calculate current, i.e., a simple
counter of crossings, not weighted by the inverse cosine: but despite a widespread belief, current is only
seldom a quantity worth calculating. The results of USRBDX can account on request for particles crossing
the boundary from either side or from one side only, and are in the form of double-differential energy and
angular spectra. The angle considered is again that with the normal to the boundary at the crossing point.
USRYIELD is a multi-purpose estimator option, which can estimate several different double-differential
quantities. The main one is an energy-angle double-differential yield of particles escaping from a target, the
angle in this case being with respect to a fixed direction. Energy and angle can be replaced by many other
variables which are mostly of the same kind, such as momentum and rapidity. But it is possible also to score
yields as a function of charge and LET (Linear Energy Transfer).
Production of residual nuclei can be obtained with command RESNUCLEi. The results, which are
closely related to induced activity and dose rate from activated components, may include nuclei produced
in low-energy neutron interactions, provided the corresponding information is available in the neutron cross
section library for the materials of interest (see Table 10.3 in Section 10.4).
Typical particle physics applications, in particular calorimetry, require separate scoring event by event
(that is, results are printed after each primary particle history). Two commands, EVENTBIN (p. 124) and
EVENTDAT (p. 126), are respectively the event-equivalent of USRBIN and SCORE which have been introduced
before. A third command, DETECT (p. 101), allows event-by-event scoring of energy deposition simulating a
detector trigger, defining coincidences and anti-coincidences. All these options are incompatible with any
biasing. It is suggested to use command GLOBAL (p. 141) to make sure that the run will be completely
analogue.
There are a few commands which are used to modify some of the scoring options already described.
TCQUENCH (p. 236), which has already been shown to define a time cutoff, can be used also to apply a
quenching factor (Birks factor) to energy deposition scored with USRBIN or EVENTBIN. ROT–DEFI (p. 221)
and ROTPRBIN (p. 224) allow roto-translation transformations to be defined for binnings not aligned with the
coordinate axes. ROTPRBIN can be used also to set the binning storage precision: a space saving feature,
which is useful mainly when scoring event by event with EVENTBIN.
It is possible to transport and score in the same run also the beta and gamma radiation emitted in the decay
of radioactive nuclei produced in the hadronic or electromagnetic cascade. Several options are available for
this purpose: RADDECAY (p. 214) is used to request the simulation of radioactive decays, IRRPROFIle (p. 149)
defines a time profile for the intensity of the primary particles, DCYTIMES (p. 91) requests one or more decay
times at which the desired scoring shall occur, and DCYSCORE (p. 89) associates selected scoring detectors
to the decay times so requested.
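A hedged sketch of how the four commands could be combined (all values illustrative; the layouts assumed are:
WHAT(1) = 1.0 on RADDECAY activates the decay simulation, IRRPROFI takes duration/intensity pairs in seconds and
particles per second, DCYTIMES lists cooling times in seconds counted from the end of irradiation, and DCYSCORE
associates the cooling time of index WHAT(1) with a range of detectors of the type given as SDUM; see pp. 214, 149,
91 and 89 for the exact syntax):
RADDECAY        1.0
* One hour of irradiation at 1.0E10 primaries/s, followed by 30 minutes beam-off
IRRPROFI     3600.0    1.0E10    1800.0       0.0
* Cooling times: 1 hour and 1 day
DCYTIMES     3600.0   86400.0
* Associate the first USRBIN detector with the first cooling time
DCYSCORE        1.0       0.0       0.0       1.0       1.0       1.0 USRBIN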
When run in fully analogue mode, Fluka allows the user to study fluctuations and correlations, and to
set up a direct simulation of physical reality where all moments of phase space distributions are faithfully
reproduced. On the other hand, in the many applications where only quantities averaged over many events
are of interest, it is convenient to use calculation techniques converging to the correct expectation values but
reducing the variance (or the CPU time, or both) by sampling from biased distributions. This is especially
useful in deep penetration calculations, or when the results of interest are driven by rare physical interactions
or cover a small domain of phase space.
Fluka makes available several biasing options. Some are easy to use, but others require experience
and judgment, and often a few preliminary preparation runs are needed to optimise the biasing parameters.
The easiest biasing command is fittingly called BIASING (p. 80). It provides two different kinds of
variance reduction: Multiplicity Reduction and Importance Biasing, which is based on the two complementary
techniques Geometry Splitting and Russian Roulette (RR).
Splitting and Russian Roulette are two classical variance reduction techniques, which are described
in most textbooks on Monte Carlo [48, 122]. A detailed description of how they are implemented in Fluka
is available on p. 81. Importance biasing consists in assigning an importance value to each geometry region.
The number of particles moving from a region to another will increase (by splitting) or decrease (via RR)
according to the ratio of importances, and the particle statistical weight will be modified inversely so that
the total weight will remain unchanged. In this way, the user can strive to keep the particle population
constant, making up for attenuation, or to make it decrease in regions far from the detectors where there
is a lower probability to contribute to the score. In Fluka, importance biasing can be done separately for
hadrons/muons, electrons/positrons/photons and low-energy neutrons.
Multiplicity Reduction is a simple technique which was introduced for the first time in Fluka (now
it has been adopted also by other programs), in order to decrease the computer time needed to simulate a
very high energy hadron cascade. At energies of several hundred GeV and more, the number of secondaries
produced in a hadron-nucleus interaction is very large and the total number can increase geometrically in the
following interactions, requiring an unacceptably long computer time. Since many secondaries are particles
of the same kind and with a similar angular and energy distribution, the user can decide to follow only a
region-dependent fraction of them.
In a similar way, option LAM–BIAS can be used to increase the probability of hadronic interactions,
and in particular photohadron reactions. These are the dominant reactions for high-energy electron
accelerator induced activity and shielding design, but because their cross section is small compared to that of
electromagnetic effects, analogue sampling would be very inefficient. The same command can help to get a
higher probability of hadron interaction in a thin target. It can also be used to bias a particle decay length
(for instance, to enhance muon or neutrino production) and the emission angle of the decay secondaries in
a direction indicated by the user.
The weight window is a very powerful biasing technique, not based on relative importances, but on the
absolute value of particle weight. The user sets an upper and a lower limit for the particle weight in each
geometry region, possibly tuned per type of particle and energy. Splitting and RR will be applied so that
the weight of all relevant particles will have a value between the two limits. In addition to controlling the
particle population, this technique helps also to “damp” excessive weight fluctuations due to other biasing
options.
Its use is not as easy as that of importance biasing, because it is necessary to have at least a rough
idea of what the average weights are in different regions. Special splitting and RR counters can be printed
on request to help set the window parameters, by setting SDUM = PRINT in command BIASING (p. 80). An
explanation about the meaning of the counters can be found on p. 309. Weight window setting is done in
Fluka by three input commands: WW–FACTOr (p. 270), WW–THRESh (p. 275) and WW–PROFIle (p. 273).
The first two commands must be used together: WW–FACTOr sets the upper and lower weight limits per
region, while WW–THRESh defines energy limits within which the weight window must be applied, and the
particles to which it is to be applied. The third option is reserved to low-energy neutrons, whose transport
characteristics often require a more detailed biasing pattern: WW–PROFIle indeed allows the weight
window to be tuned by neutron energy group.
The special multigroup transport structure used by Fluka for low-energy neutrons calls for some biasing
options specific to these particles. We have just introduced the weight window command WW–PROFIle. Two
more options are LOW–BIAS (p. 154), which has already been mentioned before in the context of energy
cutoffs, but which is used also to set a user-defined non-analogue absorption probability, and LOW–DOWN
(p. 156), by which it is possible to bias neutron thermalisation (downscattering). The latter, however, is an
option recommended only to users with a good knowledge and experience of neutronics.
The purpose of several Fluka input options is to trigger calls to user routines (user routines are described in
Chap. 13). One of the most important ones is SOURCE (p. 228), which makes Fluka get the characteristics
of its primary particles from subroutine SOURCE instead of from options BEAM and BEAMPOS. This option
allows several parameters to be passed to the subroutine, so that it can be driven from input without the need
to re-compile it. Note that even when using a user-written source, it is still necessary to have in input a
BEAM card indicating the maximum expected energy of a primary particle, so that the program can prepare
appropriate cross section tables. If command SOURCE is present, but no SOURCE routine has been linked, the
default one in the Fluka library will be called, which leaves unchanged the particle type, energy, position
etc. as defined by BEAM and BEAMPOS.
Command USERWEIGht (p. 244) can call five different user routines used to modify a scored quantity
(at the time of scoring). The routines are:
– FLUSCW is a function returning a multiplication factor for fluences (13.2.6). A typical application is to
convert a fluence to dose equivalent.
– COMSCW is a function returning a multiplication factor for star densities and doses (13.2.2). Common
application: converting energy deposition to dose.
– USRRNC is a subroutine providing a hook for user controlled scoring of residual nuclei (13.2.30).
– ENDSCP is a subroutine performing a displacement of the energy deposited in a particle step, for instance
to account for an instrument drift (13.2.4).
– FLDSCP is a subroutine performing a displacement (drift) of the track corresponding to a particle step
(13.2.5).
Complex magnetic fields can be defined or read from a map by a user routine MAGFLD (13.2.11). Calls
to the routine are activated by command MGNFIELD (p. 172).
A collision file (also called a collision tape, or a phase space file) is a file on which Fluka writes
on request details of user-selected events: particle trajectories, energy deposition events, source particles,
boundary crossings, physical interactions, etc. This task is performed by subroutine MGDRAW (13.2.13), which
is called if option USERDUMP is requested in input (p. 241). The default routine present in the Fluka
library can be driven as it is, without re-compilation, by setting some of the USERDUMP parameters, but
can also be modified and re-compiled to adjust to specific needs of the user. A typical simple task is to draw
particle trajectories. Another frequent application of USERDUMP is to perform a calculation in two steps,
where the second step uses the collision file as a source. In principle it is also possible to use subroutine
MGDRAW for scoring, for instance by interfacing it to some histogramming package, as it is customary in some
other Monte Carlo programs. However, in general this is discouraged in Fluka, unless the desired quantity
cannot be scored via the standard Fluka input commands, which is very rare. The Fluka scoring options
are indeed highly optimised and well checked against possible errors and artefacts. It is very unlikely that a
user might be able to achieve in a short time the same level of reliability. In any case, user-written scoring
via MGDRAW must be avoided in all runs where biasing is present, because handling the particle weights
correctly requires other Fluka tools which are not available to the normal user.
Three more input options activating calls to user routines are USRICALL (p. 260), USROCALL (p. 261)
and USRGCALL (p. 259). The first two allow the user to issue a call respectively to an initialisation routine
USRINI (13.2.27) and to an output routine USROUT (13.2.29). The third one activates a call to routine USRGLO
(13.2.26), which performs a global initialisation before any other made by Fluka.
7.1.8 Miscellaneous
Command RANDOMIZe starts a new independent random number sequence. It can be omitted only in a first
run, but it is compulsory if a sequence of independent runs is desired in order to calculate statistical errors.
This command can also be used to start different, independent random number sequences, making it possible
to run several identical jobs in parallel.
Command STOP, inserted at any point in the input file, interrupts the reading. Any further input
card is ignored. It may be made to follow a PLOTGEOM command and the corresponding input, so that the
Plotgeom program is executed, but no Fluka simulation is started.
Finally, command START is always needed to give the program the signal to begin the calculation.
The command includes the number of primary histories to be simulated. A STOP command may follow, but
it is not necessary since it is assumed to be present by default.
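The tail of a typical input file might therefore look as follows (seed value and number of histories are arbitrary; changing
WHAT(2) of RANDOMIZe from run to run yields statistically independent jobs):
RANDOMIZ        1.0   54217.0
START      100000.0
STOP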
7.2 ASSIGNMAt
Defines the correspondence between region indices (or names) and material
indices (or names). It defines also regions with magnetic and/or electric
field. There is the possibility of selectively changing region material to
vacuum/blackhole (and/or switching on/off possible fields) when transporting
radioactive decay products. Radioactive decay products originating from
regions switched to vacuum/blackhole are ignored. This is helpful for
situations where the emissions of an activated object in a complex environment
have to be evaluated standalone.
WHAT(1) = index (or name) of the material to be assigned to the regions specified by WHAT(2) and WHAT(3)
No default
WHAT(2) = lower bound (or name corresponding to it) of the region indices with material index equal or
name corresponding to WHAT(1)
(“From region WHAT(2). . . ”)
Default = 2.0
WHAT(3) = upper bound (or name corresponding to it) of the region indices with material index equal
or name corresponding to WHAT(1)
(“. . . to region WHAT(3). . . ”)
Default = WHAT(2)
WHAT(5) = 1.0: a magnetic field is present (no electric field) in the region(s) defined by WHAT(2), (3),
and (4), for both prompt and radioactive decay products
= 2.0: an electric field is present (no magnetic field) in the region(s) defined by WHAT(2),
(3), and (4), for both prompt and radioactive decay products
= 3.0: both an electric field and a magnetic field are present in the region(s) defined by
WHAT(2), (3), and (4), for both prompt and radioactive decay products
= 4.0: a magnetic field is present (no electric field) in the region(s) defined by WHAT(2), (3),
and (4), for prompt products only
= 5.0: an electric field is present (no magnetic field) in the region(s) defined by WHAT(2),
(3), and (4), for prompt products only
= 6.0: both an electric field and a magnetic field are present in the region(s) defined by
WHAT(2), (3), and (4), for prompt products only
= 7.0: a magnetic field is present (no electric field) in the region(s) defined by WHAT(2), (3),
and (4), for radioactive decay products only
= 8.0: an electric field is present (no magnetic field) in the region(s) defined by WHAT(2),
(3), and (4), for radioactive decay products only
= 9.0: both an electric field and a magnetic field are present in the region(s) defined by
WHAT(2), (3), and (4), for radioactive decay products only
= 0.0: ignored
< 0.0: resets the default (no field) in the region(s) defined by WHAT(2), (3), and (4)
Default = 0.0 (ignored)
WHAT(6) = material index, or material name, for a possible alternate material for radioactive decay
product transport. Only vacuum and blackhole (external vacuum) are allowed. (See Note 5
below).
Default (option ASSIGNMAt not requested): not allowed! Each region must be explicitly assigned a
material, or vacuum or blackhole (see Note 3).
No magnetic and no electric field is the default for all regions.
Notes
1. Several ASSIGNMAt definitions are generally necessary to assign a material to all regions. Standard material
names and their numbers are listed in Table 5.3 (p. 47). They may be redefined and others may be added (see
Note 5 to command MATERIAL, p. 163).
2. Overlapping region indices can be given in several ASSIGNMAt definitions, each definition overriding the earlier
ones. This makes the assigning of materials very convenient (see Example below).
The same can be done even if region names are used instead of indices: the region numbers correspond to the
order in which they appear in the geometry input. In any case, the name-index correspondence can be found on
standard output after a short test run.
3. Option ASSIGNMAt must always be present. If a region has not been assigned a material, the program stops at
initialisation time. Notice that this was different in previous versions of Fluka, where blackhole was assigned
by default.
4. Magnetic field tracking is performed only in regions defined as magnetic field regions by
WHAT(5) = 1.0, 3.0, 4.0, 6.0, 7.0, 9.0. It is strongly recommended to define as such only re-
gions where a magnetic field actually exists, due to the less efficient and less accurate tracking algorithm used
in magnetic fields. Defining a region as a magnetic field region and then systematically returning B = 0.0 in that
region from the user subroutine MAGFLD (p. 356) must absolutely be avoided (see MGNFIELD, p. 172).
5. There is the possibility of selectively changing regions to vacuum/blackhole (and/or switching on/off possible
fields) when transporting radioactive decay products. Radioactive decay products originating from regions
switched to vacuum/blackhole are ignored. This is helpful for situations where the emissions of an activated
object in a complex environment have to be evaluated standalone.
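Example (an illustrative sketch in a name-based input; the region names are arbitrary, WATER is assumed to have been defined with MATERIAL/COMPOUND cards, and IRON is a standard Fluka material):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...
* Assign WATER to all regions from TARG1 to TARG5, then override the
* assignment for region TARG3 only, which is made of IRON and is a
* magnetic field region (WHAT(5) = 1.0):
ASSIGNMAT    WATER     TARG1     TARG5
ASSIGNMAT     IRON     TARG3     TARG3       0.0       1.0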
7.3 AUXSCORE
WHAT(2) = particle (or particle family) to be considered as a filter for the associated scoring card
>-100: particle or particle family code
≤-100: Isotope coding (to filter ions). To select atomic number Z, mass number A and
isomeric state M :
WHAT(2) = −(Z × 100 + A × 100000 + M × 100000000).
Z = 0 means all atomic numbers for the given A
A = 0 includes all mass numbers for a given Z
M = 0 includes all ground and isomeric states.
To select only the ground state, set M = 9 (see the worked example at the end of this section).
Default = 201.0 (ALL-PART, all particles)
WHAT(4) = lower bound index (or corresponding name) of the indices of the detectors in which the
associated scoring is activated (See Note 1)
(“From detector WHAT(4). . . ”)
Default = 1.0
WHAT(5) = upper bound index (or corresponding name) of the indices of the detectors in which the
associated scoring is activated (See Note 1)
(“. . . to detector WHAT(5). . . ”)
Default = WHAT(4)
SDUM = For dose equivalent (DOSE–EQ) scoring, the user can provide the energy dependent coeffi-
cients for the conversion of fluence to effective dose and ambient dose equivalent for neutrons,
protons, charged pions, muons, photons and electrons [143], [181]
2. Effective dose sets from ICRP74 and Pelliccioni data calculated with the Pel-
liccioni radiation weighting factors Wr
(a) EAPMP : Anterior-Posterior irradiation
(b) ERTMP : Rotational irradiation geometry
(c) EWTMP : WORST possible geometry for the irradiation
(d) AMBGS
Default = AMB74
Notes
1. USRBIN/EVENTBIN detectors are counted together, and so are USRTRACK and USRCOLL.
2. Conversion coefficients for the Effective Dose “WORST” irradiation geometry are obtained by choosing at each
energy the largest value of the coefficients for the other geometries.
3. For photons and electrons, only the sets EAP74, ERT74, EWT74 and AMB74 are implemented. If sets from the
second group (EAPMP, ERTMP, EWTMP) are requested, the respective set from the first group will be used
instead. For set AMBGS, zero values are returned.
4. Dose conversion coefficients exist only for some particle types: hadrons, muons, photons, electrons/positrons.
For all other particle types, a zero factor will be returned. This is particularly important for heavy ions, for which
a zero factor will be scored (see Note 5).
5. For particles such as heavy ions, for which fluence conversion factors are not available, it is possible to score
with USRBIN the generalised particle DOSEQLET, i.e., dose equivalent as defined by ICRU: H = D × Q(L),
where L is the unrestricted Linear Energy Transfer in water.
6. A USRBIN/EVENTBIN detector can be associated with AUXSCORE only if the binned quantity is scored along
a step (10.0 ≤ WHAT(1) ≤ 18.0). In particular, AUXSCORE cannot be used to filter activity or specific
activity.
Examples:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....
USRBIN 10.0 208.0 -50.0 10.0 10.0 10.0 Ene.p
USRBIN -10.0 -10.0 -10.0 100.0 100.0 100.0 &
AUXSCORE 2.0 1.0 1.0
* The above AUXSCORE card will filter the energy scoring of the
* USRBIN card to only the energy that is deposited by protons
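As a further illustration of the isotope coding of WHAT(2) (a worked instance of the formula given above, with an arbitrary choice of ion): to filter the scoring on 12C ions, in any isomeric state, one would set
WHAT(2) = −(6 × 100 + 12 × 100000 + 0 × 100000000) = −1200600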
7.4 BEAM
Defines several beam characteristics: type of particle, energy, divergence
and profile
WHAT(2) > 0.0: beam momentum spread in GeV/c. The momentum distribution is assumed to be
rectangular
< 0.0: |WHAT(2)| is the full width at half maximum (FWHM) of a Gaussian momentum
distribution (FWHM = 2.355 σ).
This value is available in COMMON BEAMCM as variable DPBEAM. It can be used or mod-
ified in subroutine SOURCE (p. 363) if command SOURCE (p. 228) is present in input.
However, in that case the momentum/energy sampling must be programmed by the
user.
Default = 0.0
WHAT(4) ≥ 0.0: If WHAT(6) > 0.0, beam width in x-direction in cm for a beam directed along the
positive z-axis (unless a different direction is specified by command BEAMAXES: see
Note 4). The beam profile is assumed to be rectangular.
If WHAT(6) < 0.0, WHAT(4) is the maximum radius of an annular beam spot
< 0.0: |WHAT(4)| is the FWHM of a Gaussian profile in x-direction (whatever the value of
WHAT(6)) for a beam directed along the positive z-axis (unless a different direction is
specified by command BEAMAXES: see Note 4)
This value is available in COMMON BEAMCM as variable XSPOT. It can be used or modified
in subroutine SOURCE (p. 363) if command SOURCE (p. 228) is present in input.
However, in that case the x-profile sampling must be programmed by the user.
Default = 0.0
WHAT(5) ≥ 0.0: if WHAT(6) > 0.0, beam width in y-direction in cm for a beam directed along the
positive z-axis (unless a different direction is specified by command BEAMAXES: see
Note 4). The beam profile is assumed to be rectangular.
If WHAT(6) < 0.0, WHAT(5) is the minimum radius of an annular beam spot
< 0.0: |WHAT(5) | is the FWHM of a Gaussian profile in y-direction (whatever the value of
WHAT(6)) for a beam directed along the positive z-axis (unless a different direction is
specified by command BEAMAXES: see Note 4)
This value is available in COMMON BEAMCM as variable YSPOT. It can be used or modified
in subroutine SOURCE (p. 363) if command SOURCE (p. 228) is present in input.
However, in that case the y-profile sampling must be programmed by the user.
Default = WHAT(4)
WHAT(6) < 0.0: WHAT(4) and WHAT(5), if positive, are interpreted as the maximum and minimum
radii of an annular beam spot. If negative, they are interpreted as FWHMs of Gaus-
sian profiles as explained above, independent of the value of WHAT(6)
≥ 0.0: ignored
Default = 0.0
SDUM = beam particle name. Particle names and numerical codes are listed in the table of Fluka
particle types (see Table 5.1 on p. 43).
This value can be overridden in user routine SOURCE (p. 363) (if command SOURCE (p. 228) is
present in input) by assigning a value to variable IJBEAM equal to the numerical code of the
beam particle.
For heavy ions, use the name HEAVYION and specify further the ion properties by means of
option HI–PROPErt (p. 144). In this case WHAT(1) will mean the energy (or momentum) per
nuclear mass unit, and not the total energy or momentum.
The light nuclei 4He, 3He, triton and deuteron are defined with their own names (4–HELIUM,
3–HELIUM, TRITON and DEUTERON) and WHAT(1) will be the total kinetic energy or momen-
tum.
For radioactive isotopes, use the name ISOTOPE and specify further the isotope properties by
means of option HI–PROPErt (p. 144). In this case WHAT(1) and WHAT(2) are meaningless.
See Note 8 for instructions on how to run cases where the source is a radioactive isotope.
Neutrino and antineutrino interactions are activated by SDUM = (A)NEUTRIxx.
Neutrino interactions are forced to occur at the point (or area) defined in the BEAMPOSit card.
[Not yet implemented: For optical photons, use the name OPTIPHOT and specify further
the transport properties by material by means of option OPT–PROP (p. 187).]
Default = PROTON
Default (command BEAM not requested): not allowed! The WHAT(1) value of the BEAM command is
imperatively required, in order to set up the maximum energy of cross-section tabulations.
Notes
1. Simple cases of sources uniformly distributed in a volume can be treated as SDUM options of command
BEAMPOSit (p. 76). Other cases of distributed, non-monoenergetic or otherwise more complex sources should be
treated by means of a user-written subroutine SOURCE (p. 363) as explained in the description of the SOURCE
option (p. 228), or, in some special cases, by means of a pre-defined source invoked by command SPECSOUR
(see p. 230, 381, 378, 390). In particular, the BEAM definition cannot handle beams of elliptical cross section
and rectangular profile. However, even when using a SOURCE subroutine, the momentum or kinetic energy
defined by WHAT(1) of BEAM is meaningful, since it is taken as maximum energy for several cross section
tabulations and scoring facilities.
Advice: when a user-written SOURCE is used, set WHAT(1) in BEAM equal to the maximum expected momen-
tum (or energy) of any particle to be transported.
2. A two-dimensional distribution, Gaussian with equal variances in x and y, results in a radial Gaussian distri-
Examples:
* The following BEAM card refers to a 100 keV pencil-like
* electron beam:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
BEAM -1.E-4 0.0 0.0 0.0 0.0 1.0 ELECTRON
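A further example (with arbitrary values) illustrating the momentum spread and the annular spot options described above:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...
* A proton beam of 2 GeV kinetic energy, with a Gaussian momentum spread
* of 0.05 GeV/c FWHM, and an annular spot of outer radius 3 cm and
* inner radius 1 cm (WHAT(6) < 0.0):
BEAM            -2.0     -0.05       0.0       3.0       1.0      -1.0 PROTON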
7.5 BEAMAXES
Defines the axes used for a beam reference frame different from the geometry
frame
WHAT(1) = cosine of the angle between the x-axis of the beam reference frame and the x-axis of the
geometry frame
Default : no default
WHAT(2) = cosine of the angle between the x-axis of the beam reference frame and the y-axis of the
geometry frame
Default : no default
WHAT(3) = cosine of the angle between the x-axis of the beam reference frame and the z-axis of the
geometry frame
Default : no default
WHAT(4) = cosine of the angle between the z-axis of the beam reference frame and the x-axis of the
geometry frame
Default : no default
WHAT(5) = cosine of the angle between the z-axis of the beam reference frame and the y-axis of the
geometry frame
Default : no default
WHAT(6) = cosine of the angle between the z-axis of the beam reference frame and the z-axis of the
geometry frame
Default : no default
Default (option BEAMAXES not requested): the beam frame coincides with the geometry frame
Notes
1. Option BEAM (p. 71) describes a simple pencil beam, or also a beam simply distributed in space (angular
divergence and transversal profile), provided the beam axis coincides with the z-axis of the input geometry.
Also a possible beam polarisation described by option POLARIZAti (p. 212) refers to a beam with its axis
coinciding with the geometry z-axis.
The purpose of option BEAMAXES is to allow the user to define angular divergence, transversal profile and po-
larisation for a beam of arbitrary direction, either constant as defined by option BEAMPOSit, or not necessarily
known in advance as provided by a user SOURCE routine (13.2.19). For this purpose, the user can define diver-
gence, profile and polarisation in a beam reference frame. Option BEAMAXES establishes the correspondence
between beam and geometry reference frame.
2. The origin of the beam reference frame coincides always with that of the geometry frame.
3. The user needs to input only the direction cosines of the x- and of the z-axis of the beam frame. The direction
of the y-axis is determined by the program as the vector product z × x.
4. If the x- and z-axes defined with BEAMAXES are not exactly perpendicular (in double precision!) the program
forces perpendicularity by adjusting the cosines of the x-axis.
5. The direction cosines of the x- and z-axes do not need to be exactly normalised to 1. The code takes care of
properly normalising all cosines.
Example:
* The next option cards describe a 10 GeV proton beam with a divergence of
* 50 mrad and a gaussian profile in the "beam x"-direction and in the
* "beam y"-direction described by standard deviations sigma_x = 1. cm
* (FWHM = 2.36 cm) and sigma_y = 0.5 cm (FWHM = 1.18 cm). The beam starts
* from point (0,0,0) and is directed in a direction perpendicular to the
* "geometry x" axis, at 45 degrees with respect to both "geometry y" and
* "geometry z". The "beam x" axis has cosines 1,0,0 and the "beam z"
* axis has cosines 0, cos(pi/4), cos(pi/4)
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
BEAM -10.0 0.0 50.0 -2.36 -1.18 1.0 PROTON
BEAMPOS 0.0 0.0 0.0 0.0 0.7071068 0.0
BEAMAXES 1.0 0.0 0.0 0.0 0.7071068 0.7071068
7.6 BEAMPOSit
Defines the coordinates of the centre of the beam spot (i.e., the point from
which transport starts) and the beam direction. It also allows the definition of some
spatially extended sources.
WHAT(4) = direction cosine of the beam with respect to the x-axis of the beam reference frame (namely
the geometry reference frame unless defined differently with BEAMAXES).
This value is available in COMMON BEAMCM as variable UBEAM. It can be used or modified in
subroutine SOURCE (p. 363) if command SOURCE (p. 228) is present in input.
Default = 0.0
WHAT(5) = direction cosine of the beam with respect to the y-axis of the beam reference frame (namely
the geometry reference frame unless defined differently with BEAMAXES).
This value is available in COMMON BEAMCM as variable VBEAM. It can be used or modified in
subroutine SOURCE (p. 363) if command SOURCE (p. 228) is present in input.
Default = 0.0
SDUM = NEGATIVE means that the direction cosine with respect to z-axis is negative.
The value of the direction cosine with respect to the z-axis can be overridden in user routine
SOURCE by assigning a value to variable WBEAM in COMMON BEAMCM (make sure that the three
cosines are properly normalised so that the sum of their squares is 1.0 in double precision! )
Default : beam directed in the positive z-direction
The command defines a spatially extended source shaped as a spherical shell. The centre x,y,z of the outer
and of the inner sphere, as well as the particle direction, must be defined by another BEAMPOSit command.
The particle angular distribution, or lack of it, is defined by the BEAM card.
Default : 0.0
WHAT(2) > 0.0: radius in cm of the outer sphere defining the shell
= 0.0: ignored
< 0.0: resets to default
Default : 1.0
The command defines a spatially extended source shaped as a cylindrical shell with the height of the outer
and of the inner cylinder parallel to the z axis of the beam reference frame (namely the geometry reference
frame unless defined differently with BEAMAXES). The outer and the inner cylinder are centred at a x,y,z
point defined by another BEAMPOSit command, which also sets the particle direction by means of a SDUM
blank or = NEGATIVE. The particle angular distribution, or lack of it, is defined by the BEAM card.
WHAT(2) > 0.0: radius in cm of the outer cylinder defining the shell
= 0.0: ignored
< 0.0: resets to default
Default : 1.0
WHAT(4) > 0.0: height in cm of the outer cylinder defining the shell
= 0.0: ignored
< 0.0: resets to default
Default : 1.0
The command defines a spatially extended source shaped as a Cartesian shell with the edges parallel to the
axes of the reference frame (namely the geometry reference frame unless defined differently with BEAMAXES).
The outer and the inner parallelepiped are centred at a x,y,z point defined by another BEAMPOSit command,
which also sets the particle direction by means of a SDUM blank or = NEGATIVE. The particle angular
distribution, or lack of it, is defined by the BEAM card.
WHAT(1) ≥ 0.0: length in cm of the x side of the inner parallelepiped defining the shell
< 0.0: resets to default
Default : 0.0
WHAT(2) > 0.0: length in cm of the x side of the outer parallelepiped defining the shell
= 0.0: ignored
< 0.0: resets to default
Default : 1.0
WHAT(3) ≥ 0.0: length in cm of the y side of the inner parallelepiped defining the shell
< 0.0: resets to default
Default : 0.0
WHAT(4) > 0.0: length in cm of the y side of the outer parallelepiped defining the shell
= 0.0: ignored
< 0.0: resets to default
Default : 1.0
WHAT(5) ≥ 0.0: length in cm of the z side of the inner parallelepiped defining the shell
< 0.0: resets to default
Default : 0.0
WHAT(6) > 0.0: length in cm of the z side of the outer parallelepiped defining the shell
= 0.0: ignored
The command defines a source distribution on a spherical surface, centred at the x,y,z point defined by
another BEAMPOSit command with SDUM blank or = NEGATIVE, such as to produce a uniform and isotropic
fluence within the sphere. The value of the produced fluence will be 1/(πR²) cm⁻².
Default (option BEAMPOSit not requested): beam starting at point 0.0, 0.0, 0.0 in the z-direction
Notes
1. To take full advantage of some tracking optimisation features, it is often a good idea to create a buffer vacuum
region containing the whole geometry, which must itself be contained within the external (mandatory) black
hole region. It is then suggested that the beam impact point be chosen in vacuum, slightly upstream of the
actual one on a material boundary. As a general rule, in any case, it is recommended never to select the starting
point exactly on a boundary.
2. The beam spot coordinates and the beam director cosines as defined with the BEAMPOSit card are available
to user routines with names XBEAM, YBEAM, ZBEAM and UBEAM, VBEAM, WBEAM respectively. These variables,
as well as those defining other beam properties, are in COMMON BEAMCM which can be accessed with the INCLUDE
file (BEAMCM).
3. Beam divergence and transversal profile defined by option BEAM (p. 71), as well as polarisation defined by
option POLARIZAti (p. 212), are meaningful only if the beam direction is along the positive z-axis, unless a
command BEAMAXES is issued to establish a beam reference frame different from the geometry frame (see
p. 74).
4. When an isotropic source is defined (by setting WHAT(3) > 2000π in option BEAM),
any cosines defined by option BEAMPOSit become meaningless, although their values are still reported on
standard output.
Examples:
* A beam parallel to the x-axis starting at a point of
* coordinates -0.1, 5.0, 5.0 :
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...
BEAMPOS -0.1 5.0 5.0 1.0 0.0 0.0
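A second example (the coordinates are arbitrary), using SDUM = NEGATIVE to direct the beam along the negative z-axis:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...
* A beam starting at (0.0, 0.0, 100.0) and travelling towards negative z:
BEAMPOS          0.0       0.0     100.0       0.0       0.0       0.0 NEGATIVE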
7.7 BIASING
Biases the multiplicity of secondaries (only for hadron, heavy ion or
muon/photon nuclear interactions) on a region-by-region basis.
Sets importance sampling (Russian Roulette/splitting) at boundary crossing
by region and by particle.
The meaning of WHAT(1). . . WHAT(6) and SDUM is different depending on the sign of WHAT(1):
If WHAT(1) ≥ 0.0:
WHAT(2) = RR (or splitting) factor by which the average number of secondaries produced in a collision
should be reduced (or increased). Meaningful only for hadron, heavy ion, or muon/photon
nuclear interactions.
This value can be overridden in the user routine UBSSET (p. 370) by assigning a value to
variable RRHADR.
Default = 1.0
WHAT(4) = lower bound (or corresponding name) of the region indices with importance equal to WHAT(3)
and/or with multiplicity biasing factor equal to WHAT(2)
(“From region WHAT(4). . . ”)
Default = 2.0
WHAT(5) = upper bound (or corresponding name) of the region indices with importance equal to WHAT(3)
and/or with multiplicity biasing factor equal to WHAT(2)
(“. . . to region WHAT(5). . . ”)
Default = WHAT(4)
WHAT(3) = lower bound (or corresponding name) of the particle numbers to which the indicated modi-
fying parameter applies
(“From particle WHAT(3). . . ”)
Default = 1.0
WHAT(4) = upper bound (or corresponding name) of the particle numbers to which the indicated modi-
fying parameter applies
( “. . . to particle WHAT(4). . . ”)
Default = WHAT(3) if WHAT(3) > 0.0, all particles otherwise
SDUM = PRIMARY: importance biasing is applied also to primary particles (cancels any previous
NOPRIMARy request)
= NOPRIMARy: importance biasing is applied only to secondaries
Default = PRIMARY
WARNING
Notes
1. WHAT(2), with WHAT(1) ≥ 0.0, governs the application of Russian Roulette (or splitting) at hadronic colli-
sions, in order to achieve a reduction (resp. an increase) of the multiplicity of secondaries.
The same secondary is loaded onto the particle stack for further transport 0, 1 or any number of times depending
on a random choice, such that on average the requested multiplicity reduction (or increase) is achieved. The
weight of the stacked particles is automatically adjusted in order to account for the bias thus introduced.
If Russian Roulette has been requested, the reduction will not affect the leading particle, which will always be
retained, with unmodified weight. Also, no RR is performed when the number of secondaries is less than 3.
On the contrary, there are no such limitations for splitting (multiplicity increase).
There is some analogy with leading particle biasing as performed for electrons and photons with option
EMF–BIAS, and for hadrons in codes like Casim [203].
2. WHAT(3), with WHAT(1) ≥ 0.0, governs RR/splitting at boundary crossing. The number of particles of
the selected type crossing a given boundary is reduced/increased on average by a factor equal to the ratio
of the importances on either side of the boundary. It is the relative importances of adjacent
regions that matter, not their absolute values. As a guideline, in shielding and, in general, strong attenuation problems,
the importance of a region should be about inversely proportional to the corresponding attenuation factor
(absorption plus distance attenuation). This would exactly compensate the dilution of particle density leading
to a particle population approximately uniform in space. In some cases, however, when the user is interested
in improving statistics only in a limited portion of space, a uniform population density is not desirable, but it
is convenient to set importances so as to increase particle densities in a particular direction.
3. Different importances can be given to the same region for different particles, using the particle-dependent
modifying factor M which can be defined setting WHAT(1) < 0.0.
The modifying parameter M (WHAT(2), with WHAT(1) < 0.0) works as follows:
At a boundary crossing, let us call I1 the importance of the upstream region, and I2 that of the downstream
region.
– If I2 < I1 , Russian Roulette will be played.
Without any modifying factor, the chance of particle survival is I2 /I1 .
For 0.0 ≤ M ≤ 1.0, the survival chance is modified to:
1.0 − M × (1.0 − I2 /I1 )
It can be seen that a value M = 0.0 resets the chance of survival to 1.0, namely inhibits Russian
Roulette biasing.
A value M = 1.0 leaves the survival chance unmodified, while any value between 0.0 and 1.0 increases
the probability of survival with respect to the basic setting.
For M ≥ 1.0, the survival chance is modified to:
I2 /(M × I1 )
So, a value larger than 1.0 decreases the probability of survival with respect to the basic setting.
– If I2 > I1 , there will be splitting.
Without any modifying factor, the number of particles is increased on average by a factor I2 /I1 .
With the modifying factor, the number of particles is increased instead by:
1.0 + M × (I2 /I1 − 1.0)
It can be seen that a value M = 0.0 resets the splitting factor to 1.0, namely inhibits splitting.
A value M = 1.0 leaves the number of particles unmodified; a value between 0.0 and 1.0 decreases the
amount of splitting with respect to the basic setting; a value > 1.0 increases the amount of splitting.
Hint: One of the most common uses of the modifying factor is to play Russian Roulette/splitting only for some
selected particles: one does that by inhibiting biasing for all other particles, i.e., setting = 0.0 the modifying
factor M (WHAT(2), with WHAT(1) < 0.0).
4. In the most general case, increasing a region’s importance leads to an increased particle “traffic” through that
region and consequently to better scoring statistics in the regions “beyond”. However, one should avoid giving
scoring regions importances much larger than those of the adjacent regions, since this produces correlated
tallies: if that happens, the scoring statistics may look good only in appearance. Too different importances in
adjacent regions should also be avoided: the best biasing is applied gently, without forcing, and in a way as
continuous as possible.
5. All these biasing techniques are intended to improve statistics in some parts of phase space at the expense of
the other parts. In particular, biased runs can neither accelerate convergence in all regions, nor reproduce
natural fluctuations and correlations. Do not bias unless you know what you are doing!
6. Advice: When choosing the multiplicity reduction option of BIASING, or any other biasing option which can
introduce weight fluctuations in a given region, it is suggested to set also a weight window (cards WW–FACTOr,
p. 270 and WW–THRESh, p. 275) in order to avoid too large fluctuations in weight. The window must be
consistent with the other weight-modifying options, i.e. it must be approximately centred on the average value
of the weight expected in the region in question. If necessary, set SDUM = PRINT to get such information.
In case no window has been set, the code still keeps weights under control (but only those of low-energy
neutrons) by imposing a maximum deviation from a central value. This reference level is usually equal to the
inverse of the neutron importance in the region in question. However, since for technical reasons in Fluka
allowed importance values range only from 0.0001 to 100000.0, the user can multiply all the importances by a
factor, only for the purpose of calculating the reference weight level, by means of option WW–PROFIle (p. 273).
If the only biasing is via region importances set by WHAT(3), only limited fluctuations arise (all particles of a
given kind have about the same weight in the same region), and no window is needed.
7. Importance biasing cannot be made by user routine USIMBS and by setting region importances at the same
time.
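As an illustration of importance biasing (a sketch: the region numbers and importance values are arbitrary, and WHAT(1) = 0.0 is assumed here to mean that the settings apply to all particles):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...
* Give regions 8, 9 and 10 importances 10, 20 and 40 respectively (each
* roughly inversely proportional to the expected attenuation, see Note 2),
* with the multiplicity biasing factor WHAT(2) set to 1.0 (no multiplicity
* biasing):
BIASING          0.0       1.0      10.0       8.0       8.0
BIASING          0.0       1.0      20.0       9.0       9.0
BIASING          0.0       1.0      40.0      10.0      10.0
* At the boundary between regions 8 and 9 particles are on average split
* by the importance ratio 20/10 = 2; with a modifying factor M = 0.5 the
* splitting would instead be 1.0 + 0.5 × (2.0 − 1.0) = 1.5 (see Note 3).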
7.8 COMPOUND
Defines a compound, alloy or mixture, made of several materials, or even a
mixture of different isotopes
No default
In a similar way, WHAT(3) and WHAT(4) refer to the second material in the compound, WHAT(5) and
WHAT(6) to the third one.
For more than three materials in the same compound, add as many COMPOUND cards with the same SDUM
name as needed (but the maximum number of components per compound is 80, and the maximum total
number of components is 2400).
Notes
1. Option COMPOUND must always be used in conjunction with a MATERIAL card having the same SDUM
name (see MATERIAL , p. 163). MATERIAL cards used for this purpose provide the density of the compound,
its material number and name (WHAT(1) and WHAT(2) of the MATERIAL option, namely atomic and mass
number, are ignored).
2. The order of MATERIAL and COMPOUND cards is irrelevant.
3. The atom (or molecule) content, mass fraction or volume fraction need only to be given on a relative basis
(normalisation is done automatically by the program).
4. Partial pressures of an (ideal) gas are equivalent to molecule fractions and also to volume fractions.
5. If a compound is defined by volume fractions of the components (either elements or compounds themselves —
see Note 8 for recursive definitions), Fluka internally calculates the atomic densities of each component using
the densities indicated in the respective MATERIAL cards: in this case, therefore, (and only in this case) it is
important that these correspond to the actual densities.
6. Isotopic compositions other than natural can be defined by the COMPOUND option too.
7. When using the LOW–NEUT option (p. 160) (explicitly or by default set by the DEFAULTS option (p. 92)),
a special data set containing low-energy neutron cross sections for each material used must be available. The
data sets are combined in a single file, delivered with the Fluka program (logical input unit LUNXSC = 9).
Each low-energy neutron data set is identified either by name (if equal to a Fluka name and unique or first
with that name), or/and by one or more identifiers given with a card LOW–MAT (p. 158) when necessary to
remove ambiguity.
In the case of a composite material defined by a COMPOUND option, two possibilities are allowed (see
LOW–MAT):
(a) to associate the Fluka material with a pre-mixed neutron data set. In this case interactions take place
with individual nuclei at high energy, while average cross sections are used for low-energy neutrons. Note
that no pre-mixed neutron data set is yet available (at the moment the standard sets contain pure elements
or isotopes only).
(b) to associate the Fluka material with several elemental neutron data sets (one per component element).
In this case both high-energy and low-energy neutron interactions take place with individual nuclei. This
is the only possibility at present but it may change in the future.
8. Recursion is allowed, i.e., the components of a composite material can be composite materials. The depth of
recursion is only limited by the size of the internal arrays (in case of overflow a message is issued and the
job is terminated). Different levels of recursion can use different units in the definition of the component
fractions (atoms, mass or volume fractions). Note, however, that if a compound is put together from different
composite molecules, the atomic and molecular fractions have to be given without normalisation (use the
chemical formulae directly).
What follows is an example (for a number-based input) of a simple compound BOOZE containing 50% in weight
of water and 50% of ethanol.
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
MATERIAL 1.0 0.0 .0000899 3.0 0.0 0.0 HYDROGEN
MATERIAL 6.0 0.0 2.0 4.0 0.0 0.0 CARBON
MATERIAL 8.0 0.0 0.00143 5.0 0.0 0.0 OXYGEN
MATERIAL 0.0 0.0 1.0 20.0 0.0 0.0 WATER
MATERIAL 0.0 0.0 0.7907 7.0 0.0 0.0 ETHANOL
MATERIAL 0.0 0.0 0.9155 8.0 0.0 0.0 BOOZE
COMPOUND 2.0 3.0 1.0 5.0 0.0 0.0 WATER
COMPOUND 2.0 4.0 6.0 3.0 1.0 5.0 ETHANOL
COMPOUND -50.0 20.0 -50.0 7.0 0.0 0.0 BOOZE
* Note that in the above example materials 4, 5, 7 and 8 have been defined
* overriding the default FLUKA material numbers. This is only allowed in
* an explicitly number-based input, declared as such with WHAT(4) = 4.0 in
* command GLOBAL.
Example of how COMPOUND is commonly used to define a mixture (concrete). In a number-based input:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
* definition of material 27 (concrete) as compound: H (1%), C(0.1%),
* O(52.9107%), Na(1.6%), Mg(0.2%), Al(3.3872%), Si(33.7021%), K(1.3%),
* Ca(4.4%), Fe(1.4%)
MATERIAL 19.0 0.0 0.862 26.0 0.0 0.0 POTASSIU
MATERIAL 0.0 0.0 2.35 27.0 0.0 0. CONCRETE
COMPOUND -0.01 3.0 -0.001 6.0 -0.529107 8. CONCRETE
7.9 CORRFACT
Allows the user to alter the material density used for dE/dx and for nuclear
processes on a region-by-region basis
WHAT(1) ≥ 0.0: density scaling factor for charged particle ionisation processes (dE/dx, delta ray pro-
duction, Møller and Bhabha scattering)
= 0.0: ignored
−2.0 < WHAT(1) < 0.0: |WHAT(1)| is assumed to be relative to WHAT(2); the final density correction
factor will be |WHAT(1)| × WHAT(2)
≤ -2.0: reset to default
Default : 1.0
= 0.0: ignored
Default : 1.0
WHAT(4) : lower index bound (or corresponding name) of regions where the scaling factors shall apply
(“From region WHAT(4). . . ”)
Default : 2.0
WHAT(5) : upper index bound (or corresponding name) of regions where the scaling factors shall apply
(“. . . to region WHAT(5). . . ”)
Default = WHAT(4)
Default (option CORRFACT not requested): no density scaling factors are applied
Note
1. Option CORRFACT is mainly used in connection with voxel geometries derived from a CT scan, where particle
transport is often done in an equivalent material (e.g., water), while accounting for the density variations provided
by the scan at the voxel level. While this approach is reliable for what concerns ionisation, other reactions, which
do not scale with density, must be simulated for the actual material composition.
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...
* Multiply density by a 0.85 factor for what concerns atomic processes
* in regions 7, 8, 9, 10, 11, 12
CORRFACT 0.85 0.0 0.0 7.0 12.
The same example, in a name-based input, supposing that the geometry is made of 12 regions:
CORRFACT 0.85 0.0 0.0 The7thRg @LASTREG
* Note the use of the name @LASTREG to indicate the maximum number of regions
* in the problem
7.10 DCYSCORE
Associates selected scoring detectors of given estimator type with user-
defined decay times or with combined prompt-decay particle scoring. In the
first case, see warning in Note 1 below. Also needed for scoring when the
source is a radioactive isotope.
WHAT(1) > 0.0: cooling time index to be associated with the detector(s) of estimator type defined by
SDUM and with number (or name) corresponding to WHAT(4)–WHAT(6) (see Note 2
below)
Only allowed when decay is calculated in “activation study” non-analogue mode (see
Note 1 of RADDECAY and Note 5 below).
= −1.0 : if option RADDECAY has been requested with WHAT(1) > 1.0, i.e., for radioactive
decays activated in semi-analogue mode, the detectors defined by WHAT(4)–WHAT(6)
will score both prompt and radioactive decay particles. See Note 6, and 7 for the case
where the source has been defined as a radioactive isotope.
= 0.0 : no scoring
Default = 0.0
WHAT(4) : lower index bound (or corresponding name) of detectors of type SDUM associated with the
specified cooling time (if WHAT(1) > 0.) or with combined prompt-decay particle scoring (if
WHAT(1) = –1.)
(“From detector WHAT(4) of estimator type SDUM. . . ”)
Default = 1.0
WHAT(5) : upper index bound (or corresponding name) of detectors of type SDUM associated with the
specified cooling time or with combined prompt-decay particle scoring
(“. . . to detector WHAT(5) of estimator type SDUM. . . ”)
Default = WHAT(4)
SDUM : identifies the kind of estimator under consideration: EVENTBIN, RESNUCLEi, USRBDX, USRBIN,
USRCOLL, USRTRACK, USRYIELD
Default : no default!
Default (option DCYSCORE not requested): no particles originated from radioactive decay are scored,
in whatever mode decay is being calculated (see Notes 5, 6 and 7 below).
Notes
1. Warning: when the DCYSCORE option is applied to a detector with WHAT(1) > 0.0, all quantities are
expressed per unit time (in seconds). For instance, the RESNUCLEi estimator will output Bq, dose estimators
will provide dose rate, etc.
If WHAT(1) = –1., all quantities are normalized per unit primary weight, or per decay if the source has been
defined as a radioactive isotope.
2. The cooling time index indicated by WHAT(1) > 0. must be one of those defined by means of option DCYTIMES.
(p. 91)
3. USRBIN and EVENTBIN detectors are counted together (they belong to the same index sequence), and so are
USRTRACK and USRCOLL detectors
4. The scoring of decay radiation in “activation study” non-analogue mode (WHAT(1) > 0.0, see Note 1 of
RADDECAY) is different from all the rest: decay particles are transported once, and a factor is applied at
scoring time depending on the decay time associated to each detector.
This factor accounts for the buildup and decay of the parent nucleus at the specified decay time. This factor
is not available in user routines such as FLUSCW and COMSCW (Sec. 13.2.6, 13.2.2).
5. If radioactive decay has been requested in “activation study” mode (command RADDECAY with WHAT(1) = 1)
particles originated from radioactive decay are scored at cooling times requested with DCYTIMES. Command
DCYSCORE, called with WHAT(1) > 0.0, is used to associate a detector of type SDUM to a particular cooling
time. If DCYSCORE is not present, no particles originated from radioactive decay can be scored.
6. If radioactive decay has been requested in “semi-analogue” mode (Monte Carlo sampling: command
RADDECAY with WHAT(1) > 1.0), particles originated from radioactive decay are scored by the selected
detectors of type SDUM together with prompt particles, provided DCYSCORE is issued with WHAT(1) = –1.0.
If DCYSCORE is not present, no particles originated from radioactive decay can be scored, and any detector
will score only prompt particles.
7. If the source has been defined as a radioactive isotope (see command BEAM with SDUM = ISOTOPE), the
radioactive decay must be activated in semi-analogue mode (see Note 1 of RADDECAY), and therefore WHAT(1)
of DCYSCORE must be = –1.0. This can be considered as a special case of the situation described in Note 6,
where only decay particles are present, and no prompt particles. Also in this case, if DCYSCORE is not present
no particles originated from radioactive decay can be scored. And since no prompt particles are present, no
scoring at all can take place.
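Example (the detector indices and the cooling time index are arbitrary):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...
* Associate USRBIN detectors 1 to 4 and RESNUCLEi detector 1 with the
* cooling time of index 3 (the third time defined with DCYTIMES):
DCYSCORE         3.0       0.0       0.0       1.0       4.0          USRBIN
DCYSCORE         3.0       0.0       0.0       1.0       1.0          RESNUCLE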
7.11 DCYTIMES
Defines decay times for radioactive product scoring
Option DCYTIMES defines decay times after irradiations at which selected quantities (for instance residual
dose) are scored.
WHAT(1) : cooling time (in s) after the irradiation end, to be associated to a scoring detector (see Note 1
below)
Any value (positive, zero or negative): a new decay time is added with delay WHAT(1)
Notes
1. Each cooling time is assigned an index, following the order in which it has been input. This index can be used
in option DCYSCORE to assign that particular cooling time to one or more scoring detectors.
2. Multiple cards can be given, up to the desired number of decay times. All decay times are counted from the
end of the last irradiation period as defined by the IRRPROFIle command. A null decay time activates scoring
exactly at the end of irradiation. A negative decay time is admitted: scoring is performed at the chosen time
“during irradiation”.
Example:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
DCYTIMES 10. 30. 3600. 43200. 86400. 172800.
DCYTIMES 2592000. 31557600.
* Eight different cooling times have been defined, each with an index
* corresponding to its input order: cooling time no. 1 is 10 s, no. 2 is 30 s,
* and those from no. 3 to 8 are respectively 1 h, 1/2 d, 1 d, 2 d, 30 d, 1 y
7.12 DEFAULTS
Sets Fluka defaults suitable for a specified kind of problems. Starting
from Fluka99.5 (June 2000) the standard defaults are those described un-
der NEW–DEFAults below. That is, if no DEFAULTS card is issued the code
behaves as if a card with NEW–DEFAults was given.
CALORIMEtry
• Rayleigh scattering and inelastic form factor corrections to Compton scattering and Compton profiles
activated (no EMFRAY needed)
• Detailed photoelectric edge treatment and fluorescence photons activated (no EMFFLUO needed)
• Low-energy neutron transport on down to thermal energies included (no LOW–NEUT needed). High energy
neutron threshold at 20 MeV.
• Fully analogue absorption for low-energy neutrons
• Particle transport threshold set at 1 × mpart/mprot MeV, except for neutrons (1 × 10⁻⁵ eV) and
(anti)neutrinos (0, but they are discarded by default anyway)
• Multiple scattering threshold at minimum allowed energy, for both primary and secondary charged
particles
• Delta ray production on with threshold 100 keV (see option DELTARAY)
• Restricted ionisation fluctuations on, for both hadrons/muons and EM particles (see option IONFLUCT)
• Fraction of the kinetic energy to be lost in a step set at 0.08, number of dp/dx tabulation points set at
80 (see options DELTARAY, EMFFIX, FLUKAFIX)
• Heavy particle e+ e− pair production activated with full explicit production (with the minimum threshold
= 2 me )
• Heavy particle bremsstrahlung activated with explicit photon production above 300 keV
• Muon photonuclear interactions activated with explicit generation of secondaries
• Heavy fragment transport activated
EET/TRANsmut
• Low-energy neutron transport on down to thermal energies included, (high energy neutron threshold at
20 MeV)
• Non-analogue absorption for low-energy neutrons with probability 0.95 for the last (thermal) groups
• Particle transport threshold set at 1 MeV, except neutrons (1 × 10⁻⁵ eV) and (anti)neutrinos (0, but
they are discarded by default anyway)
• Multiple scattering threshold for primary and secondary charged particles lowered to 10 and 20 MeV
respectively
• Unrestricted ionisation fluctuations on, for both hadrons/muons and EM particles (if requested) (see
option IONFLUCT)
• Both explicit and continuous heavy particle bremsstrahlung and pair production inhibited
EM–CASCAde
HADROTHErapy
• EMF on
• Inelastic form factor corrections to Compton scattering and Compton profiles activated
• Low-energy neutron transport on down to thermal energies included, no need for option LOW–NEUT
(high energy neutron threshold at 20 MeV)
• Fully analogue absorption for low-energy neutrons
• Particle transport threshold set at 100 keV, except for neutrons (1 × 10⁻⁵ eV) and (anti)neutrinos (0,
but they are discarded by default anyway)
• Multiple scattering threshold at minimum allowed energy, for both primary and secondary charged
particles
• Delta ray production on with threshold 100 keV (see option DELTARAY)
• Restricted ionisation fluctuations on, for both hadrons/muons and EM particles (see option IONFLUCT)
• Tabulation ratio for hadron/muon dp/dx set at 1.03, fraction of the kinetic energy to be lost in a step
set at 0.02 (see options DELTARAY, EMFFIX, FLUKAFIX)
ICARUS
• EMF on
• Rayleigh scattering and inelastic form factor corrections to Compton scattering and Compton profiles
activated (no EMFRAY needed)
• Detailed photoelectric edge treatment and fluorescence photons activated (no EMFFLUO needed)
• Low-energy neutron transport on down to thermal energies included, (high energy neutron threshold at
20 MeV)
• Fully analogue absorption for low-energy neutrons
• Particle transport threshold set at 100 keV, except neutrons (1 × 10⁻⁵ eV) and (anti)neutrinos (0, but
they are discarded by default anyway)
• Multiple scattering threshold at minimum allowed energy, for both primary and secondary charged
particles
• Delta ray production on with threshold 100 keV (see option DELTARAY)
• Restricted ionisation fluctuations on, for both hadrons/muons and EM particles (see option IONFLUCT)
• Tabulation ratio for hadron/muon dp/dx set at 1.04, fraction of the kinetic energy to be lost in a step
set at 0.05, number of dp/dx tabulation points set at 80 (see options DELTARAY, EMFFIX, FLUKAFIX)
• Heavy particle e+ e− pair production activated with full explicit production (with the minimum threshold
= 2 me )
• Heavy particle bremsstrahlung activated with explicit photon production above 300 keV
• Muon photonuclear interactions activated with explicit generation of secondaries
• Heavy fragment transport activated
NEUTRONS
• Low-energy neutron transport on down to thermal energies included, no need for LOW–NEUT (high
energy neutron threshold at 20 MeV)
• Non analogue absorption for low-energy neutrons with probability 0.95 for the last (thermal) groups
• Both explicit and continuous heavy particle bremsstrahlung and pair production inhibited
NEW–DEFAults (standard defaults active even if the DEFAULTS card is not present)
• EMF on, with electron and photon transport thresholds to be set using EMFCUT command
• Inelastic form factor corrections to Compton scattering activated (no need for EMFRAY)
• Low-energy neutron transport on down to thermal energies included, (no need for LOW–NEUT). The
neutron high energy threshold is set at 20 MeV.
• Non analogue absorption for low-energy neutrons with probability 0.95 for the last (thermal) groups
• Particle transport threshold set at 10 MeV, except for neutrons (1 × 10⁻⁵ eV), and (anti)neutrinos (0,
but they are discarded by default anyway)
• Multiple scattering threshold for secondary charged particles lowered to 20 MeV (equal to that of the
primary ones)
• Delta ray production on with threshold 1 MeV (see option DELTARAY)
• Restricted ionisation fluctuations on, for both hadrons/muons and EM particles (see option IONFLUCT)
• Heavy particle e+ e− pair production activated with full explicit production (with the minimum threshold
= 2 me )
• Heavy particle bremsstrahlung activated with explicit photon production above 1 MeV
• Muon photonuclear interactions activated with explicit generation of secondaries
PRECISIOn
• EMF on
• Rayleigh scattering and inelastic form factor corrections to Compton scattering and Compton profiles
activated
• Detailed photoelectric edge treatment and fluorescence photons activated
• Low-energy neutron transport on down to thermal energies included (high energy neutron threshold at
20 MeV)
• Fully analogue absorption for low-energy neutrons
• Particle transport threshold set at 100 keV, except neutrons (1 × 10⁻⁵ eV) and (anti)neutrinos (0, but
they are discarded by default anyway)
• Multiple scattering threshold at minimum allowed energy, for both primary and secondary charged
particles
• Delta ray production on with threshold 100 keV (see option DELTARAY)
• Restricted ionisation fluctuations on, for both hadrons/muons and EM particles (see option IONFLUCT)
• Tabulation ratio for hadron/muon dp/dx set at 1.04, fraction of the kinetic energy to be lost in a step
set at 0.05, number of dp/dx tabulation points set at 80 (see options DELTARAY, EMFFIX, FLUKAFIX)
• Heavy particle e+ e− pair production activated with full explicit production (with the minimum threshold
= 2 me )
• Heavy particle bremsstrahlung activated with explicit photon production above 300 keV
• Muon photonuclear interactions activated with explicit generation of secondaries
• Heavy fragment transport activated
SHIELDINg
• Low-energy neutron transport on down to thermal energies included (the neutron high energy threshold
is set at 20 MeV)
• Non analogue absorption for low-energy neutrons with probability 0.95 for the last (thermal) group
• Particle transport threshold set at 10 MeV, except neutrons (1 × 10⁻⁵ eV) and (anti)neutrinos (0, but
they are discarded by default anyway)
• Multiple scattering threshold for secondary charged particles lowered to 20 MeV (= primary ones)
• Both explicit and continuous heavy particle bremsstrahlung and pair production inhibited
• EMF off!!! This default is meant for simple hadron shielding only!
Notes
1. If an option does not appear in input, Fluka provides default parameter values in most cases. Standard
defaults are also applied when the option is present but not all its WHAT and SDUM parameters have been
defined explicitly by the user. However, some types of problems are better handled using different defaults.
Option DEFAULTS allows the user to override the standard ones with others, tuned to a specific class of transport
problems.
The present set of defaults (valid if no DEFAULTS card is issued) is equivalent to that set by
SDUM = NEW–DEFAults.
2. Important! Option DEFAULTS must be issued at the very beginning of input. It can be preceded only by a
GLOBAL card and by TITLE. This is one of the rare cases, like GLOBAL, MAT–PROP and PLOTGEOM, where
sequential order of input cards is of importance in Fluka (see Chap. 6).
3. The name of the SHIELDINg default refers to simple calculations for proton accelerators, where the electromag-
netic component can be neglected. It is not applicable to electron accelerator shielding or any other shielding
problem where the gamma component is important.
4. The responsibility of choosing reasonable defaults, compatible with the rest of input, is left to the user. In
particular, choosing the defaults corresponding to pure EM cascade or to pure low-energy neutron problems has
the effect of turning off all initialisations related to the hadronic generators. This will save considerable time
at the beginning of the run, but will lead to a crash if a hadron generator is called because of some other input
option. In particular, SDUM = EM–CASCA is incompatible with option PHOTONUC and with beam/source
particles different from PHOTON, ELECTRON and POSITRON; and SDUM = NEUTRONS is incompatible
with option EMF, with any beam particle different from NEUTRON and with energies higher than 20 MeV.
On the other hand, it is possible to override some of the defaults, in particular the various thresholds, by issuing
the corresponding command after DEFAULTS (PART–THR, EMFCUT, DELTARAY, etc.)
Example:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....
DEFAULTS 0.0 0.0 0.0 0.0 0.0 0.0 EM-CASCA
* The above declaration refers to a problem where only electrons, positrons
* and photons are transported.
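A second example (a sketch) of the mechanism mentioned in Note 4, where a command issued after DEFAULTS overrides one of the defaults it implies:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....
DEFAULTS         0.0       0.0       0.0       0.0       0.0       0.0 PRECISIO
* Keep the PRECISIOn defaults, but raise the delta ray production
* threshold back to 1 MeV in all materials (name-based input):
DELTARAY       0.001       0.0       0.0  HYDROGEN  @LASTMAT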
Table 7.1: Defaults defined by different SDUM values of the DEFAULTS command
(one column per SDUM value, abbreviated to its first 8 characters; the first column,
labelled “no card”, gives the settings applied when no DEFAULTS card is present)

Command             no card   CALORIMe  EET/TRAN  EM-CASCA  HADROTHe  ICARUS    NEUTRONS  NEW-DEFA  PRECISIO  SHIELDIN
DELTARAY  WHAT(1)   0.001     0.0001    -1.       -1.       0.0001    0.0001    -1.       0.001     0.0001    -1.
          WHAT(2)   50.       80.       50.       50.       50.       80.       50.       50.       80.       50.
          WHAT(3)   1.15      1.15      1.15      1.15      1.03      1.04      1.15      1.15      1.04      1.15
EMF                 ON        ON        OFF       ON        ON        ON        OFF       ON        ON        OFF
EMFFLUO   WHAT(1)   0.        1.        0.        1.        0.        1.        0.        0.        1.        0.
EMFRAY    WHAT(1)   3.        1.        0.        1.        3.        1.        0.        3.        1.        0.
EVENTYPE  WHAT(3)   1.        1.        1.        -2.       1.        1.        -2.       1.        1.        1.
FLUKAFIX  WHAT(1)   0.1       0.08      0.1       0.1       0.05      0.02      0.1       0.1       0.05      0.1
IONFLUCT  WHAT(1)   1.        1.        1.        -1.       1.        1.        -1.       1.        1.        -1.
          WHAT(2)   1.        1.        -1.       1.        1.        1.        -1.       1.        1.        -1.
LOW-BIAS  WHAT(1)   0.        0.        0.        N.A.      0.        0.        0.        0.        0.        0.
          WHAT(2)   230.      261.      230.      N.A.      230.      261.      261.      230.      261.      230.
          WHAT(3)   0.85      N.A.      0.95      N.A.      0.95      N.A.      N.A.      0.95      N.A.      0.95
LOW-NEUT            ON        ON        ON        OFF       ON        ON        ON        ON        ON        ON
          WHAT(6)   0.        1.        0.        0.        0.        1.        1.        0.        1.        0.
MCSTHRES  WHAT(1)   -0.02     1.        -0.01     -0.02     1.        1.        -0.02     -0.02     1.        -0.02
          WHAT(2)   -0.02     1.        -0.02     -1.       1.        1.        -1.       -0.02     1.        -0.02
PAIRBREM  WHAT(1)   3.        3.        -3.       -3.       3.        3.        -3.       3.        3.        -3.
          WHAT(2)   0.        0.        N.A.      N.A.      -1.       0.        N.A.      0.        0.        N.A.
          WHAT(3)   0.001     0.0003    N.A.      N.A.      -1.       0.0003    N.A.      0.001     0.0003    N.A.
PART-THR  WHAT(1)   -0.010    -0.001(*) -0.001    N.A.      -0.0001   -0.0001   N.A.      -0.010    -0.0001   -0.010

(*) For CALORIMEtry the particle transport threshold scales with the particle mass:
0.001 × mpart/mprot GeV (see the CALORIMEtry defaults above).
7.13 DELTARAY
Activates delta ray production by muons and charged hadrons and controls
the accuracy of the dp/dx tabulations
WHAT(1) > 0.0: kinetic energy threshold (GeV) for delta ray production (discrete energy transfer).
Energy transfers lower than this energy are assumed to take place as continuous
energy losses
= 0.0: ignored
< 0.0: resets the default to infinite threshold, i.e., no delta ray production
Default = 0.001 (1 MeV) if option DEFAULTS (p. 92) is not used, or if it is used with SDUM
= NEW–DEFAults.
If DEFAULTS is used with SDUM = CALORIMEtry, HADROTHErapy, ICARUS or
PRECISIOn, the default is 0.0001 (100 keV).
If it is used with any other SDUM value, the default is -1.0 (continuous slowing down
approximation without production of delta rays)
WHAT(2) > 0.0: number of logarithmic intervals for dp/dx momentum loss tabulation
= 0.0: ignored
< 0.0: resets the default to 50.0
Default = 50.0 (this is the default if option DEFAULTS is not used, or is used with anything
but SDUM = CALORIMEtry, ICARUS or PRECISIOn).
With those SDUM values, the default is 80.
See Note 2 for more details
WHAT(3) > 1.0: logarithmic width of dp/dx momentum loss tabulation intervals (ratio between upper
and lower interval limits)
0.0 ≤ WHAT(3) ≤ 1.0: ignored
≤ 0.0: resets the default to 1.15
Default = 1.15 (this is the default if option DEFAULTS is not used, or is used with any SDUM
value but HADROTHErapy, ICARUS or PRECISIOn).
If DEFAULTS is used with SDUM = ICARUS or PRECISIOn, the default is 1.04
With SDUM = HADROTHErapy the default is 1.03
See Note 2 for more details
WHAT(4) = lower index bound (or corresponding name) of materials where delta ray production or spec-
ified tabulation accuracy are requested
(“From material WHAT(4). . . ”)
Default = 3.0
WHAT(5) = upper index bound (or corresponding name) of materials where delta ray production or
specified tabulation accuracy are requested
(“. . . to material WHAT(5). . . ”)
Default = WHAT(4)
SDUM = PRINT : prints dE/dx tabulations for the given materials on standard output
(see Note 1)
= NOPRINT : resets to no printing a possible previous request for these materials
blank : ignored
Default = NOPRINT
Default (option DELTARAY not requested): the defaults depend on option DEFAULTS as explained above.
See also Note 9 and Table 7.1 at p. 97.
Notes
1. To calculate energy loss by charged particles, Fluka sets up and uses internally tables of momentum loss per
unit distance (dp/dx) rather than the more commonly used dE/dx. However, if requested to print those tables,
it outputs them as converted to dE/dx.
2. The upper and lower limit of the dp/dx tabulations are determined by the options BEAM (p. 71) and PART–THR
(p. 196), or by the corresponding defaults. Therefore, either the number or the width of the intervals is
sufficient to define the tabulations completely. If both WHAT(2) and WHAT(3) are specified, or if the value of
both is defined implicitly by the chosen default, the most accurate of the two resulting tabulations is chosen.
3. The lower tabulation limit is the momentum of the charged particle which has the lowest transport threshold.
The upper limit corresponds to the maximum primary energy (as set by BEAM), plus an additional amount
which is supposed to account for possible exoenergetic reactions, Fermi momentum and so on.
4. This option concerns only charged hadrons and muons. Delta rays produced by electrons and positrons are
always generated, provided their energy is larger than the production threshold defined by option EMFCUT.
5. Request of delta ray production is not an alternative to that of ionisation fluctuations (see IONFLUCT, p. 145).
The two options, if not used at the same time, give similar results as far as transport and energy loss are
concerned, but their effect is very different concerning energy deposition: with the IONFLUCT option the
energy lost is sampled from a distribution but is deposited along the particle track, while DELTARAY, although
leading to similar fluctuations in energy loss, will deposit the energy along the delta electron tracks, sometimes
rather far from the primary trajectory. IONFLUCT can be used even without requesting the EMF option, while
when requesting DELTARAY the EMF card must also be present (or implicitly activated by the selected default
— see option DEFAULTS, p. 92) if transport of the generated electrons is desired.
6. Normally, the energy threshold for delta ray production should be higher than the electron energy transport
cutoff specified by option EMFCUT (p. 113) for discrete electron interactions. If it is not, the energy of the
delta electron produced is deposited on the spot. As explained above, this will result in correct energy loss
fluctuations but with the energy deposited along the particle track, a situation similar to that obtained with
IONFLUCT alone.
7. Note that Fluka makes sure that the threshold for delta ray production is not set much smaller than the
average ionisation potential.
8. Presently, DELTARAY can be used together with the IONFLUCT option with a threshold for delta rays chosen
by the user. As a result, energy losses larger than the threshold result in the production and transport of
delta electrons, while those smaller than the threshold will be sampled according to the correct fluctuation
distribution.
9. Here are the settings for delta ray production and dp/dx tabulations corresponding to available DEFAULTS
options:
ICARUS, PRECISIOn: threshold for delta ray production 100 keV; momentum loss tabulation with 80 loga-
rithmic intervals or 1.04 logarithmic width (whichever is more accurate)
CALORIMEtry: threshold for delta ray production 100 keV; momentum loss tabulation with 80 logarithmic
intervals or 1.15 logarithmic width
HADROTHErapy: threshold for delta ray production 100 keV; momentum loss tabulation with 50 logarithmic
intervals or 1.03 logarithmic width
NEW–DEFAults, or DEFAULTS missing: threshold for delta ray production 1 MeV; momentum loss tabulation
with 50 logarithmic intervals or 1.15 logarithmic width
Any other SDUM value: no delta ray production; momentum loss tabulation with 50 logarithmic intervals
or 1.15 logarithmic width
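Example (number-based; the two cards below are one possible way of obtaining the settings described in the comments, assuming the usual meaning of WHAT(6) as the material index step):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....
DELTARAY 0.01 30.0 0.0 3.0 18.0 1.0 PRINT
DELTARAY 0.02 0.0 1.05 4.0 12.0 8.0 NOPRINT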
* In this example, delta rays with energies higher than 20 MeV (0.02 GeV)
* will be produced in materials 4 and 12; for the same materials,
* logarithmic intervals with a ratio of 1.05 between the upper and the
* lower limit of each interval are requested for the dp/dx tabulation. For
* all other materials with number between 3 and 18, delta rays are
* produced above 10 MeV and 30 intervals are used in the dp/dx tabulation.
* Tabulations of dE/dx are printed for all materials except 4 and 12.
The following is an example where a threshold of 500 keV is set for delta ray production in all materials:
DELTARAY 5.E-4 0.0 0.0 HYDROGEN @LASTMAT PRINT
7.14 DETECT
In the following, an “event” means energy deposited in one or more detector regions by one primary particle
and its descendants, i.e., between two successive calls to the FEEDER subroutine (case of an incident beam)
or to a user-written SOURCE subroutine (case of an extended source, or of a source read from a collision file
or sampled from a distribution).
A “signal” means energy deposited in one or more trigger regions by the same primary particle and descen-
dants (i.e., between the same calls).
The output of DETECT is a distribution of energy deposited per event in the region(s) making up the detector
in (anti-)coincidence with a signal larger than a given cutoff or threshold in the trigger region(s).
It is also possible to define detector combinations (NOT YET IMPLEMENTED!!!)
This option can extend over several sequential cards. The meaning of the parameters on the first card is the following:
If WHAT(1) = 0.0:
WHAT(2) = minimum total energy to be scored in the detector regions in one event, i.e., lower limit of
the event distribution
Default = 0.0
WHAT(3) = maximum total energy to be scored in the detector regions in one event, i.e., upper limit of
the event distribution
No default
WHAT(4) = cutoff energy for the signal (coincidence/anti-coincidence threshold). The energy deposi-
tion event is scored only if a total of more than WHAT(4) GeV are/aren’t correspondingly
deposited in the trigger regions
Default = 10^-9 GeV (= 1 eV)
WHAT(5) ≥ 0.0: the detector regions, taken together, must be considered in coincidence with the
trigger regions (also taken together)
< 0.0: the detector regions must be considered in anti-coincidence with the trigger regions
Default: anti-coincidence
WHAT(6) : region number or name not preceded by a minus sign: first region of the detector
region number or name preceded by a minus sign: first region of the trigger
(the other regions will be given with continuation cards, see below)
No default
SDUM = detector name (max. 10 characters) unless the character “ & ” is present
Continuation card:
WHAT(2) – WHAT(6) = following regions (with sign). If not preceded by a minus sign, they are
considered detector regions, otherwise trigger regions
SDUM = “ & ” in any position in column 71 to 78 (or in the last field if free format is used)
Note: if no trigger region is given (i.e., no region with negative sign), a simple event by event scoring
takes place
For the definition of detector combinations (not yet implemented, see above):
WHAT(2) : first detector to be considered for this combination, in coincidence if WHAT(2) is not preceded
by a minus sign, in anticoincidence otherwise
Default : ignored
WHAT(3) : second detector to be considered for this combination, in coincidence if WHAT(3) is not
preceded by a minus sign, in anti-coincidence otherwise
Default : ignored
WHAT(4) : third detector to be considered for this combination, in coincidence if WHAT(4) is not preceded
by a minus sign, in anti-coincidence otherwise
Default : ignored
WHAT(5) : fourth detector to be considered for this combination, in coincidence if WHAT(5) is not
preceded by a minus sign, in anti-coincidence otherwise
Default : ignored
WHAT(6) : fifth detector to be considered for this combination, in coincidence if WHAT(6) is not preceded
by a minus sign, in anti-coincidence otherwise
Default : ignored
SDUM = combination name (max. 10 characters) unless the character “ & ” is present
Continuation card:
WHAT(2) – WHAT(6) = following detectors (with sign). If not preceded by a minus sign, they are
considered in coincidence, otherwise in anti-coincidence
SDUM = “ & ” in any position (or in the last field if free format is used)
Notes
1. Output from DETECT is written unformatted on logical unit 17. To recover the output, it is necessary to run
a Fortran program containing the following lines:
.........................................
CHARACTER*80 RUNTIT, RUNTIM*32, CHNAME*10
INTEGER*4 NCASE, NDET, NBIN, IV(1024)
REAL EMIN, EBIN, ECUT
.........................................
.........................................
READ(17) RUNTIT, RUNTIM, WEIPRI, NCASE
READ(17) NDET, CHNAME, NBIN, EMIN, EBIN, ECUT
READ(17) (IV(I), I = 1, NBIN)
.........................................
This is the meaning of the variables read:
RUNTIT = title of the run
RUNTIM = date and time of the run
WEIPRI = total weight of the primary particles
NCASE = number of primary particles followed
NDET = detector number
CHNAME = detector name (SDUM of the first DETECT card defining the detector)
NBIN = number of channels of the event distribution (1024 in the standard version)
EMIN = lower limit of the event distribution (WHAT(2) of the first DETECT card)
EBIN = width of the energy channels
ECUT = signal cutoff or threshold (WHAT(4) of the first DETECT card)
IV = array containing the event distribution (number of events scored in each channel)
2. Important: option DETECT will give meaningful results only when Fluka is used in a completely analogue
mode, since correlations are destroyed by biasing. Thus, DETECT cannot be used together with any biasing
option or weight-changing facility. It is recommended for this purpose to issue a GLOBAL command with
WHAT(2) < 0.0 at the beginning of input (see GLOBAL, Sec 7.32).
A list of incompatible options is: BIASING, EMF–BIAS, LOW–BIAS, LAM–BIAS, WW–FACTOr, EMFCUT with
WHAT(3) < 0.0, EMF with WHAT(6) ≠ 1.0, EXPTRANS, LOW–DOWN.
Example (number-based):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....
DETECT 0.0 1.E-4 1.E-2 5.E-5 1.0 7.0 Coincid_1
DETECT 0.0 -9.0 -12.0 10.0 11.0 &
* The meaning of the above lines is the following:
* a "signal" equal to energy deposition in regions 7 + 10 + 11 will be
* scored if:
* 1) that signal is larger than 1.E-4 GeV and smaller than 0.01 GeV
* 2) at the same time (in coincidence) an energy deposition of at least
* 5.0E-5 GeV occurs in regions 9 + 12
* The output will give a signal (event) spectrum between the energy
* limits 1.0E-4 and 0.010 GeV, distributed over a fixed number of channels
* (1024 in the standard version).
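A simple event-by-event scoring, without any coincidence or anti-coincidence condition, is obtained by giving only detector regions (no region preceded by a minus sign). The card below is an illustrative sketch (region number, energy limits and detector name are arbitrary):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....
DETECT 0.0 1.E-6 0.01 1.E-9 1.0 5.0 PulseHgt
* Event-by-event spectrum of the energy deposited in region 5, scored
* between 1 keV and 10 MeV; since no trigger region is given, no
* coincidence condition is applied.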
7.15 DISCARD
WHAT(1) – WHAT(6) : id-number or name of particles to be discarded (see particle numbers and names
in Table 5.1, p. 43).
If one of the WHATs is preceded by a minus sign a previous corresponding DISCARD command
(explicit or by default) will be canceled.
When full heavy particle transport is activated (see IONTRANS, p. 148), discarding of heavies
can be performed by setting the WHATs = (1000 + K_heavy), with K_heavy = 3...6 (heavy ion
particle code):
3 = ²H, 4 = ³H, 5 = ³He, 6 = ⁴He, 7-12 = fission fragments
Except for fission fragments, the corresponding names can also be used.
Undiscarding heavies is obtained by setting the WHATs = (1000 - K_heavy), or by giving the
corresponding names preceded by a minus sign.
The whole scheme is shown in the following table:
                      Discard               Undiscard
²H                    1003 or DEUTERON      997 or -DEUTERON
³H                    1004 or TRITON        996 or -TRITON
³He                   1005 or 3-HELIUM      995 or -3-HELIUM
⁴He                   1006 or 4-HELIUM      994 or -4-HELIUM
fission fragments     1007-1012             993-988
No default
Default (option DISCARD not given): only neutrinos and antineutrinos are discarded by default. Set the
WHATs = -5.0, -6.0, -27.0, -28.0, -43.0, -44.0 or NEUTRIE, ANEUTRIE, NEUTRIM,
ANEUTRIM, NEUTRIT, ANEUTRIT in order to have them transported.
Notes
5. Warning: discarding the particles which propagate the hadronic cascade (neutrons, protons, pions) will lead
in general to unpredictable results.
Example (number-based):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
DISCARD 3.0 4.0 7.0 10.0 11.0 23.0
* This example illustrates a typical situation where the use of DISCARD
* can considerably reduce the computing time: for instance in pure
* hadronic or neutron problems (hadron fluence calculations without interest
* in energy deposition). In this case electrons, positrons, photons, muons
* and pi0 do not contribute to the result and can be discarded
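The same example, name-based (particle names as listed in Table 5.1):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
DISCARD ELECTRON POSITRON PHOTON MUON+ MUON- PIZERO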
7.16 ELCFIELD
Defines a homogeneous electric field
WHAT(1) = largest angle (in degrees) that a particle is allowed to travel in a single step
Default : 20°
WHAT(2) = error of the boundary iteration (minimum accuracy accepted in determining a boundary
intersection)
Default : 0.01 cm
WHAT(3) = minimum step if the step is forced to be smaller due to a too large angle
Default : 0.1 cm
Note
7.17 EMF
Activates ElectroMagnetic Fluka: transport of electrons, positrons and photons
See also DEFAULTS, DELTARAY, EMF–BIAS, EMFCUT, EMFFIX, EMFFLUO, EMFRAY, MULSOPT, PHOTONUC
SDUM = EMF–OFF to switch off electron and photon transport. Useful with the new defaults where
EMF is on by default
Default : EMF on
Default (option EMF not requested): if option DEFAULTS is not used, or if it is used with
SDUM = NEW–DEFAults, CALORIMEtry, EM–CASCAde, HADROTHErapy, ICARUS or PRECISIOn,
electrons, positrons and photons are transported.
If DEFAULTS is used with SDUM = OLD–DEFAults, EET/TRANsmut, NEUTRONS, SHIELDINg or any-
thing else, electrons, positrons and photons are not transported (see Note 2). To avoid their energy
being deposited at the point of production, it is generally recommended to discard those particles (see
Note 5).
See also Table 7.1 at p. 97.
Notes
1. Option EMF is used to request a detailed transport of electrons, positrons and photons. Even if the primary
particles are not photons or electrons, photons are created in high-energy hadron cascades, mainly as a product
of π 0 decay, but also during evaporation and fission of excited nuclei; and capture gamma-rays are generated
during low-energy neutron transport. Electrons can arise from muon decay or can be set in motion in knock-on
collisions by charged particles (delta rays)
2. If EMF has been turned off by overriding the default (by setting SDUM = EMF–OFF or by a DEFAULTS option
which switches off electron-photon transport, such as OLD–DEFAults, EET/TRANsmut, NEUTRONS, SHIELDINg,
not accompanied by an explicit EMF request), such electrons, positrons and photons are not transported
and their energy is deposited on the spot at the point of creation unless those particles are DISCARDed (see
Note 5 below).
3. Of course, it is also mandatory to request option EMF (either explicitly or implicitly via option DEFAULTS,
p. 92) in any pure electron, positron or photon problem (i.e., with electrons, positrons or photons as primary
particles).
4. Using EMF without any biasing can lead to very large computing times, especially in problems of high primary
energy or with low-energy cutoffs. See in particular leading-particle biasing with EMF–BIAS (p. 108).
5. In case of a pure hadron or neutron problem (e.g., neutron activation calculation) it is recommended to
DISCARD electrons, positrons and photons (id-number 3, 4 and 7, see p. 104). In this case it is irrelevant
whether the EMF card is present or not. Discarding only electrons and positrons, but not photons, may also
be useful in some cases (for instance when calculating photon streaming in a duct).
6. An alternative is to set very large energy cutoffs for electrons and positrons (see EMFCUT, p. 113). That will
result in the electron energy being deposited at the point of photon interaction (kerma approximation, often
sufficient for transport of photons having an energy lower than a few MeV).
7. Hadron photoproduction is dealt with by option PHOTONUC (p. 198).
Example:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
EMF EMF-OFF
* This command must be issued without any WHAT parameter.
7.18 EMF–BIAS
Sets electron and photon special biasing parameters, including leading parti-
cle biasing region by region, and mean free path biasing material by material
WHAT(1) > 0.0: Leading Particle Biasing (LPB) is activated. Which combination of leading particle
biasing is actually set up depends on the bit pattern of WHAT(1).
Let WHAT(1) be represented as:
2^0 b0 + 2^1 b1 + 2^2 b2 + 2^3 b3 + 2^4 b4 + 2^5 b5 + 2^6 b6 + 2^7 b7 + 2^8 b8 + 2^9 b9 (each bit b0...b9 being 0 or 1)
Note that WHAT(1) = 1022 activates LPB for all physical effects (values larger than
1022 are converted to 1022)
< 0.0: leading particle biasing is switched off
= 0.0: ignored
WHAT(2) > 0.0: energy threshold below which leading particle biasing is played for electrons and
positrons (for electrons, such threshold refers to kinetic energy; for positrons, to total
energy plus rest mass energy)
< 0.0: resets any previously defined threshold to infinity (i.e., leading particle biasing is
played at all energies)
= 0.0: ignored
This value can be overridden in the user routine UBSSET (p. 370) by assigning a value
to variable ELPEMF
Default : leading particle biasing is played at all energies for electrons and positrons
WHAT(3) > 0.0: energy threshold below which leading particle biasing is played for photons
< 0.0: resets any previously defined threshold to infinity (i.e., leading particle biasing is
played at all energies)
= 0.0: ignored
This value can be overridden in the user routine UBSSET (p. 370) by assigning a value
to variable PLPEMF.
Default : leading particle biasing is played at all energies for photons
WHAT(4) = lower bound (or corresponding name) of the region indices where the selected leading particle
biasing has to be played
(“From region WHAT(4). . . ”)
Default = 2.0
WHAT(5) = upper bound (or corresponding name) of the region indices where the selected leading particle
biasing has to be played
(“. . . to region WHAT(5). . . ”)
Default = WHAT(4)
SDUM = LPBEMF (Leading Particle Biasing for EMF). This is the default, for other values of SDUM
see below.
This value can be overridden in the user routine UBSSET (p. 370) by assigning a value to
variable LPEMF.
WHAT(1) > 0.0 and < 1.0: the interaction mean free paths for all electron and positron electromag-
netic interactions (SDUM = LAMBEMF), or for electron/positron bremsstrahlung only
(SDUM = LAMBBREM) are reduced by a multiplying factor = WHAT(1)
= 0.0: ignored
< 0.0 or ≥ 1.0: resets to default (no mean free path biasing for electrons and positrons)
WHAT(2) > 0.0 and < 1.0: the interaction mean free paths for all photon electromagnetic interac-
tions (SDUM = LAMBEMF), or for Compton scattering only (SDUM = LAMBCOMP)
are reduced by a multiplying factor = WHAT(2)
= 0.0: ignored
< 0.0 or ≥ 1.0: resets to default (no mean free path biasing for photons)
WHAT(4) = lower bound (or corresponding name) of the indices of materials in which the indicated mean
free path biasing has to be applied
(“From material WHAT(4). . . ”)
Default = 3.0
WHAT(5) = upper bound (or corresponding name) of the indices of materials in which the indicated mean
free path biasing has to be applied
(“. . . to material WHAT(5). . . ”)
Default = WHAT(4)
SDUM = LAMBEMF (LAMbda Biasing for ElectroMagnetic Fluka): mean free path biasing is applied
to all electron, positron and photon interactions, and both the incident and the secondary
particle are assigned a reduced weight
= LAMBCOMP (LAMbda Biasing for Compton interactions): mean free path biasing is applied
only to photon Compton effect, and both the incident photon and the secondary electron are
assigned a reduced weight
= LAMBBREM (LAMbda Biasing for BREMsstrahlung interactions): mean free path biasing is
applied only to electron and positron bremsstrahlung, and both the incident electron/positron
and the secondary photon are assigned a reduced weight
= LBRREMF (Lambda Biasing with Russian Roulette for ElectroMagnetic Fluka): mean free
path biasing is applied to all electron, positron and photon interactions, and the incident par-
ticle either is suppressed or survives with the same weight it had before collision, depending
on a random choice
= LBRRCOMP (Lambda Biasing with Russian Roulette for Compton interactions): mean free
path biasing is applied only to photon Compton effect, and the incident photon either is
suppressed or survives with the same weight it had before collision, depending on a random
choice
= LBRRBREM (Lambda Biasing with Russian Roulette for BREMsstrahlung interactions):
mean free path biasing is applied only to electron and positron bremsstrahlung, and the
incident electron/positron either is suppressed or survives with the same weight it had be-
fore collision, depending on a random choice
Default :LPBEMF (see above)
Default : (option EMF–BIAS not requested): none of the above biasings apply. Note, however, that
leading particle biasing can also be requested by option EMFCUT, p. 113 (not recommended).
Notes
1. Depending on the SDUM value, different kinds of biasing are applied to the secondary particles issued from
the reaction.
2. If SDUM = LPBEMF, the interaction point of electrons, positrons and photons is sampled analogically and
Leading Particle Biasing is applied to the secondary particles, in a manner similar to that provided by option
EMFCUT (p. 113).
However, Leading Particle Biasing with EMFCUT applies to all electromagnetic effects, while EMF–BIAS can
be tuned in detail for each type of electron and photon interactions.
3. With all other values of SDUM, the interaction point is sampled from an imposed (biased) exponential distri-
bution, in a manner similar to that provided by option LAM–BIAS for hadrons and muons (p. 150). Further
differences in SDUM values allow to restrict biasing to one specific type of interaction and/or to select different
treatments of the incident particle.
4. If SDUM = LAMBEMF, LAMBCOM, LAMBBREM, the weights of both the incident and the secondary particle
are adjusted according to the ratio between the biased and the physical interaction probability at the sampled
distance.
5. If SDUM = LBRREMF, LBRRCOM, LBRRBREM, the suppression or survival of the incident particle (with
unchanged weight) is decided by Russian Roulette with a probability equal to the ratio between the biased and
the physical interaction probability at the sampled distance. The weight of the secondary particle is adjusted
by the same ratio.
6. When using option EMF–BIAS, and in particular when choosing the Russian Roulette alternative, it is suggested
to set also a weight window (cards WW–FACTOr, p. 270 and WW–THRESh, p. 275) in order to avoid too large
weight fluctuations.
7. LAMBCOMP (LBRRCOMP) and LAMBBREM (LBRRBREM) are synonyms: i.e., input concerning photon in-
teraction biasing given with SDUM = LAMBBREM (LBRRBREM) is accepted and treated in the same way as
with SDUM = LAMBCOMP (LBRRCOMP); and input concerning electron/positron interaction biasing with
SDUM = LAMBCOMP (LBRRCOMP) is the same as with LAMBBREM (LBRRBREM). This allows to issue just
a single EMF–BIAS card requesting both electron and photon interaction biasing at the same time.
8. Option EMF–BIAS concerns only electromagnetic interactions; photonuclear interaction biasing is provided by
option LAM–BIAS (p. 150).
Example 1 (number-based):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
EMF-BIAS 152. 0. 5.E-4 16. 20. 2. LPBEMF
* LPB is applied in regions 16, 18 and 20 as regards Compton scattering
* below 0.5 MeV and positron annihilation in flight and at rest.
* Code 152 = 2^3 (annihilation at rest) + 2^4 (Compton) + 2^7
* (annihilation in flight).
Example 2 (number-based):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
EMF-BIAS 1022. 0.0 0.0 3.0 8.0
* LPB is applied in regions 3, 4, 5, 6, 7 and 8 for all electron and photon
* interactions at all energies
Example 3 (number-based):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
EMF-BIAS 1022. 0.0 0.0 1.0 15.0
EMF-BIAS -1. 0.0 0.0 7.0 11.0 2.0
WW-FACTOR 0.5 5.0 1.0 1.0 15.0
WW-FACTOR 0.5 5.0 0.2 7.0 11.0 2.0
WW-THRESH 1.0 0.001 20.0 3.0 4.0
WW-THRESH 1.0 1.E-4 20.0 7.0
* The above example illustrates the combined use of leading particle biasing
* and a region-dependent weight-window. Leading particle biasing is requested
* in all regions from 1 to 15, except 7, 9 and 11. To avoid too large weight
* fluctuations, a weight window is defined such that at the lowest energies
* (=< 20 keV for photons and =< 200 keV for electrons in regions 7, 9, 11;
* =< 100 keV for photons and <= 1 MeV for electrons in the other regions),
* Russian Roulette will be played for particles with a weight =< 0.5 and
* those with weight larger than 5.0 will be split. The size of this window
* (a factor 10) is progressively increased up to 20 at the higher threshold
* (200 MeV for both electrons and photons in regions 7, 9 and 11, 1 GeV in
* the other regions).
The same example, name-based, assuming that the 15 regions are all the regions of the problem:
EMF-BIAS 1022. 0.0 0.0 first @LASTREG
EMF-BIAS -1. 0.0 0.0 seventh eleventh 2.0
WW-FACTOR 0.5 5.0 1.0 1.0 15.0
WW-FACTOR 0.5 5.0 0.2 seventh eleventh 2.0
WW-THRESH 1.0 0.001 20.0 ELECTRON POSITRON
WW-THRESH 1.0 1.E-4 20.0 PHOTON
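Example 4 (number-based; an illustrative sketch of mean free path biasing, with an arbitrary biasing factor and arbitrary material numbers):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
EMF-BIAS 0.0 0.1 0.0 5.0 7.0 LAMBCOMP
* The photon interaction mean free path for Compton scattering is reduced
* by a multiplying factor 0.1 in materials 5, 6 and 7; the incident photon
* and the secondary electron are assigned a reduced weight.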
7.19 EMFCUT
Sets the energy thresholds for electron and photon production in different
materials, and electron and photon transport cutoffs in selected regions.
It also allows to set an arbitrary energy threshold for all electron and photon
interactions managed by EMF on a material basis. This is of course non-
physical and it is provided primarily for particular studies where the user
wants to switch off selectively a physical process.
Only meaningful when the EMF option is chosen (explicitly or implicitly via
option DEFAULTS).
WHAT(4) = lower bound (or corresponding name) of the Fluka material number where e± and photon
production thresholds respectively equal to WHAT(1) and WHAT(2) apply. The material
numbers or names are those pre-defined or assigned using a MATERIAL card.
(“From material WHAT(4). . . ”)
Default = 3.0
WHAT(5) = upper bound (or corresponding name) of the Fluka material number where e± and photon
production thresholds respectively equal to WHAT(1) and WHAT(2) apply. The material
numbers or names are those pre-defined or assigned using a MATERIAL card.
(“. . . to material WHAT(5). . . ”)
Default = WHAT(4)
Default (option EMFCUT with SDUM = PROD–CUT not requested): production cutoffs in a material
are set equal to the lowest transport cutoffs in the regions with that material.
For SDUM = blank:
WHAT(1) = electron and positron transport energy cutoff (GeV), whose meaning depends on its sign:
> 0.0: electron and positron cutoff is expressed as total energy (kinetic plus rest mass)
< 0.0: electron and positron cutoff is expressed as kinetic energy
= 0.0: ignored.
This value can be overridden in the user routine UBSSET (p. 370) by assigning a value
to variable ELECUT
Default : the e± transport cutoff is set equal to the production threshold for discrete electron
interactions
WHAT(2) > 0.0: photon transport energy cutoff (GeV)
= 0.0: ignored.
This value can be overridden in the user routine UBSSET (p. 370) by assigning a value
to variable GAMCUT
Default : the photon transport cutoff is set equal to threshold for photon production by
electron bremsstrahlung
WHAT(4) = lower bound (or corresponding name) of the region indices with electron cutoff equal to
|WHAT(1)| and/or photon cutoff equal to WHAT(2) and/or leading particle biasing.
(“From region WHAT(4). . . ”)
Default = 2.0
WHAT(5) = upper bound (or corresponding name) of the region indices with electron cutoff equal to
|WHAT(1)| and/or photon cutoff equal to WHAT(2) and/or leading particle biasing.
(“. . . to region WHAT(5). . . ”)
Default = WHAT(4)
Default (option EMFCUT with SDUM = blank not requested): transport cutoffs in a region are set equal
to the production cutoffs in the material of that region.
WHAT(2) > 0.0: kinetic energy threshold (GeV) for Bhabha/Møller scattering
= 0.0: ignored
< 0.0: resets to default
Default = 1.0
WHAT(3) > 0.0: kinetic energy threshold (GeV) for e± photonuclear interactions
= 0.0: ignored
< 0.0: resets to default
Default = 0.0
WHAT(3) > 0.0: energy threshold (GeV) for gamma pair production
= 0.0: ignored
< 0.0: resets to default
Default = 0.0
WHAT(4) = lower bound (or corresponding name) of the material indices in which the respective thresh-
olds apply
(“From material WHAT(4). . . ”)
Default = 3.0
WHAT(5) = upper bound (or corresponding name) of the material indices in which the respective thresh-
olds apply.
(“. . . to material WHAT(5). . . ”)
Default = WHAT(4)
Default (option EMFCUT not requested): Transport cutoffs are set equal to production thresholds for
discrete electron interactions and for discrete photon production.
Notes
1. Default values are available for the electron and photon production thresholds in electromagnetic interactions,
but they are generated by a complex logic based on possible other settings (transport cutoffs, delta-ray pro-
duction thresholds, DEFAULTS card, other defaults). It is not always guaranteed that the resulting values be
appropriate to the problem of interest. Therefore, in order to have a good control of the physics, the user is
recommended to provide explicit threshold values, or at least to check them on the main output.
2. Electron and positron transport cutoffs (indicated in the main output with Ecut) and photon transport cutoffs
(Pcut) are assigned by region, while electron/positron production cutoffs (Ae) and photon production cutoffs
(Ap) are assigned by material.
When the transport cutoffs in a given region are larger than those of production for discrete interactions in
the corresponding material, particles with energy higher than production thresholds and lower than transport
cutoffs are not transported but are still generated (their energy is deposited at the point of production). It is
suggested to avoid such a situation unless it is really necessary, as particle generation demands a considerable
computer time and partially offsets the gain due to a higher transport cutoff.
When instead the transport cutoffs are lower than the production ones, they are increased to be equal to them.
3. The minimum threshold energy for transport and production of photons is 100 eV. For electrons and positrons,
it is 1 keV.
Example 1 (number-based):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...
EMFCUT -1.0E-5 1.0E-5 0.0 4.0 8.0 PROD-CUT
* A production threshold of 10 keV is set for electrons, positrons and
* photons in all materials from 4 to 8.
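The same production thresholds, set name-based for all materials (following the HYDROGEN @LASTMAT convention used in the DELTARAY example):
EMFCUT -1.0E-5 1.0E-5 0.0 HYDROGEN @LASTMAT PROD-CUT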
Example 2 (number-based):
EMFCUT -0.002 2.E-4 1.0 1.0 15.0
EMFCUT -1.0E-4 1.E-5 -1.0 7.0 11.0 2.0
* A kinetic energy transport cutoff for electrons and positrons is set
* at 100 keV in regions 7, 9 and 11 and at 2 MeV in all other regions from 1
* to 15. Photon transport cutoff is set equal to 10 keV in regions 7, 9, 11
* and to 200 keV in the other regions.
The same example, name-based, assuming that the 15 regions are all the regions of the problem:
EMFCUT -0.002 2.E-4 1.0 FirstReg @LASTREG
EMFCUT -1.0E-4 1.E-5 -1.0 Seventh Eleventh 2.0
7.20 EMFFIX
Sets the size of electron steps corresponding to a fixed fraction of the total
energy. The setting is done by material, giving as many EMFFIX defini-
tions as needed. Only meaningful when the EMF option has been requested
(explicitly or implicitly via option DEFAULTS).
Default : 20 % (it is strongly recommended not to exceed this value!)
SDUM = PRINT : electron and positron dE/dx and maximum allowed step tabulations for this
material are printed
= NOPRINT : tabulations are not printed (cancels any previous PRINT request for the given
materials)
blank : ignored
Default = NOPRINT
Default (option EMFFIX not requested): the energy lost per step is 20 % for all materials
Notes
1. The default provided (step length such that 20 % of the energy is lost) is acceptable for most routine problems.
In dosimetry problems and in thin-slab geometries it is recommended not to exceed 5–10 %.
For a detailed discussion of the step length problem, see [81].
2. Related options are STEPSIZE (p. 232), MCSTHRESh (p. 170), FLUKAFIX (p. 133) and MULSOPT (p. 174).
MCSTHRESh and FLUKAFIX concern only heavy charged particles (hadrons and muons), while STEPSIZE
applies to all charged particles (hadrons, muons, electrons and positrons). However, STEPSIZE defines the
steplength in cm and by region, while EMFFIX relates the step length to the maximum allowed energy loss and
is based on materials. STEPSIZE works also in vacuum and is adapted to problems with magnetic fields; if
both options are used, the smallest of the two steps is always chosen. Note however that if a step required by
STEPSIZE is too small for the Molière algorithm, multiple scattering is turned off (contrary to what happens
with EMFFIX). MULSOPT is very CPU-time consuming; however, it gives the highest accuracy compatible
with the Molière theory. It is used rarely, mostly in low-energy and in backscattering problems.
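Example (number-based; an illustrative sketch which assumes that the WHAT fields are given as material/fraction pairs, WHAT(1)/WHAT(2), WHAT(3)/WHAT(4) and WHAT(5)/WHAT(6)):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...
EMFFIX 10.0 0.05 17.0 0.05 0.0 0.0 PRINT
* Electron and positron steps are limited to a 5 % maximum fractional
* energy loss in materials 10 and 17, and the corresponding dE/dx and
* maximum step tabulations are printed.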
7.21 EMFFLUO
Activates a detailed treatment of photoelectric interactions and of the fol-
lowing atomic deexcitation, with production of fluorescence X-rays (and a
rough treatment of Auger electrons)
This option, meaningful only if the EMF option has been requested (explicitly or implicitly via option
DEFAULTS), requires a special unformatted file, pre-connected as logical input unit 13 (see Chap. 3). This
file is delivered with the Fluka code.
WHAT(3) = upper bound (or corresponding name) of the material indices in which fluorescence is acti-
vated
(“. . . to material WHAT(3). . . ”)
Default = WHAT(2)
Default (option EMFFLUO not requested): fluorescence is not simulated unless option DEFAULTS is
chosen with SDUM = CALORIMEtry, EM–CASCA, ICARUS or PRECISIOn.
See also Table 7.1 at p. 97.
Notes
1. Selection of the EMFFLUO option is only meaningful for a material defined with electron and photon cutoffs lower
than the highest K-edge of the elements constituting that material.
2. When EMFFLUO is activated for a compound material, if the incident photon energy is lower than the highest
K-edge for any constituent element, Fluka uses separate parameterised representations of the photoelectric
cross section between different edges of each constituent element (all levels are tabulated).
If the photon energy is higher than the highest K-edge, average cross sections for the compound are used, but
Fluka still samples a single element on which to interact, in order to generate the correct fluorescence photon
or Auger electron.
Example (number-based):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...
EMF
EMFCUT 1.E-5 1.E-5 0.0 17. 18.
EMFFLUO 1.0 17. 18.
* In the above example, the user activates fluorescence production in
* Lead and Tantalum (standard FLUKA material numbers 17 and 18). The
* photon and electron cutoff has been set at 10 keV (the K-edge for
* Pb and Ta is of the order of 100 keV).
7.22 EMFRAY
Activates Rayleigh (coherent) scattering and Compton binding corrections
and profile function corrections in selected regions. Only meaningful when
the EMF option has been requested, explicitly or implicitly via option DE-
FAULTS.
WHAT(1) = 1.0: both Rayleigh scattering and Compton binding corrections are activated, no Compton
profile function
= 2.0: only Rayleigh scattering is activated
= 3.0: only Compton binding corrections are activated
= 4.0: all, Rayleigh, binding corrections, and profile function in Compton are activated
= 5.0: only Rayleigh scattering is activated
= 6.0: only binding corrections and profile function in Compton are activated
= 0.0: ignored
< 0.0: no Rayleigh scattering and neither binding corrections nor Compton profile function
Default : if option DEFAULTS is used with SDUM = NEW–DEFAults or HADROTHErapy, the
default is 3.0.
If it is not used, or is used with SDUM = CALORIMEtry, EM–CASCAde, ICARUS or
PRECISIOn, the default is 1.0.
With any other value of SDUM, the default is 0.0.
WHAT(2) = lower bound (or corresponding name) of the region indices where Rayleigh scattering and/or
Compton binding corrections and/or Compton profile function are requested
(“From region WHAT(2). . . ”)
Default = 2.0
WHAT(3) = upper bound (or corresponding name) of the region indices where Rayleigh scattering and/or
Compton binding corrections and/or Compton profile function are requested
(“. . . to region WHAT(3). . . ”)
Default = WHAT(2)
Default (option EMFRAY not requested): if option DEFAULTS is used with SDUM = NEW–DEFAults or
HADROTHErapy, binding corrections in Compton scattering are activated, but not Rayleigh
scattering.
If DEFAULTS is not used, or is used with SDUM = CALORIMEtry, EM–CASCAde, ICARUS or
PRECISIOn, both are activated.
With any other value of SDUM, binding corrections and Rayleigh scattering are not activated.
See also Table 7.1 at p. 97.
Notes
1. The treatment of Rayleigh scattering is rather poor for non monoatomic materials (it assumes additivity and
does not take into account important molecular effects). However, Rayleigh scattering, in general, has little
effect on energy deposition and on particle transport.
2. The full treatment of electron binding and motion in Compton scattering (referred to above as "binding corrections
and profile function") is now available. It is particularly important for low energies and/or heavy materials, and
in general for all problems where the best accuracy for photon transport is requested.
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...
EMF
EMFRAY 1.0 8.0 12.0 4.0
EMFRAY 3.0 9.0 15.0 2.0
* In the above example, Compton binding corrections are requested in
* regions 8 to 15, excluding regions 10 and 14.
* Rayleigh scattering is requested only in regions 8 and 12
7.23 EVENTBIN
For calorimetry only.
Superimposes a binning structure to the geometry and prints the result after
each “event”
For a description of input for this option, refer to USRBIN (7.77). Meaning of WHATs and SDUM is practically
identical for the two options. The only difference here is that if WHAT(1) is given with a negative sign, only
non-zero data (“hit cells”) are printed.
For Cartesian binning, WHAT(1) = 0.0 prints all cells, and a negative number > -0.5 must be used to
print only the “hit cells”.
See Note 2 below and Note 3 to option ROTPRBIN (p. 224).
This card is similar to USRBIN (p. 249), but the binning data are printed at the end of each event (primary
history).
Information about the binning structure is printed at the beginning, then binning data are printed at the
end of each event without any normalisation (i.e., energy per bin and not energy density per unit incident
particle weight).
If the sign of WHAT(1) in the first card defining the binning is negative, only those elements of the binning
which are non zero are printed at the end of each event, together with their index.
To read EVENTBIN unformatted output, see instructions for USRBIN, with the following differences:
Notes
1. Normally, this option is meaningful only in fully analogue runs. Any biasing option should be avoided, and a
GLOBAL declaration with WHAT(2) < 0.0 is recommended (p. 141). Also, it is recommended to request an
option DEFAULTS with SDUM = CALORIMEtry, ICARUS or PRECISIOn (p. 92).
2. In many cases, binnings defined by EVENTBIN result in a number of sparse “hit” cells, while all other bins
are empty (scoring zero). In such cases, it is convenient to print to file only the content of non-empty bins. In
these circumstances, it may also be convenient to allocate a reduced amount of storage (see option ROTPRBIN,
p. 224), and in particular the Note 3 to that option.
Example 1 (number-based):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....
EVENTBIN 10.0 208.0 25.0 150.0 200.0 180. Firstscore
EVENTBIN -150.0 100.0 -20.0 75.0 50.0 20.0 &
* In the above example, the user requests an event-by-event scoring of
* energy deposition (generalised particle 208), in a Cartesian
* three-dimensional array. The bins are 4 cm wide in x (75 bins between
* x = -150 and x = 150), 2 cm wide in y (50 bins between y = 100 and
* y = 200), and 10 cm wide in z (20 bins between z = -20 and z = 180).
* The results are written, formatted, on logical unit 25. The name given
* to the binning is "Firstscore".
Example 2 (number-based):
* Event-by-event scoring of photon fluence in a cylindrical mesh of
* 1200x3800 bins 1 mm wide in R and Z. Results are written unformatted on
* logical unit 27. The user requests not to print bins with zero content.
* The binning name is "Bigcylindr".
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....
EVENTBIN -11.0 7.0 -27.0 600.0 0.0 1900.0 Bigcylindr
EVENTBIN 0.0 0.0 0.0 1200.0 0.0 3800.0 &
7.24 EVENTDAT
For calorimetry only.
Prints event by event the scored star production and/or energy deposition
in each region, and the total energy balance.
EVENTDAT requests separate scoring by region of energy and/or star density for each event (primary history).
The quantities to be scored are defined via a SCORE command (see SCORE for details, p. 226).
As for SCORE, a maximum per run of 4 different energy or star densities is allowed.
The EVENTDAT output includes also a detailed energy balance event by event.
WHAT(1) = output unit. If WHAT(1) < 0.0, the output is unformatted. Values of |WHAT(1)| < 21
should be avoided (with the exception of +11).
Default = 11 (standard output)
Notes
1. The meaning of the main variables written to the EVENTDAT output is the following:
ENDIST(I) = 12 energy contributions to the total energy balance, some of which appear also at the end of
the standard output. Here they are given separately for each primary history (in GeV) and not
normalised to the weight of the primary. Note that some of the contributions are meaningful only
in specific contexts (e.g., if low-energy neutron transport has been requested).
ENDIST(1) = energy deposited by ionisation
ENDIST(8) = energy deposited by low-energy neutrons (kerma, proton recoil energy not included)
ENDIST(10) = energy lost in endothermic nuclear reactions (gained in exothermic reactions if < 0.0) above
20 MeV (not implemented yet)
ENDIST(11) = energy lost in endothermic low-energy neutron reactions (gained in exothermic reactions if
< 0.0) (not implemented yet)
ENDIST(12) = missing energy
REGSCO(IR,ISC) = energy or stars (corresponding to the ISCth generalised particle distribution) deposited or
produced in the IRth region during the current primary history. Not normalised, neither to the
primary weight nor to the region volume.
NDUM, DUM1, DUM2 = three dummy variables, with no meaning
ISEED1, ISEED2, SEED1, SEED2, SOPP1, SOPP2 = random number generator information to be read in order
to reproduce the current sequence (skipping calls, see option RANDOMIZe, p. 217).
2. All the above quantities are written in single precision (REAL*4), except RUNTIT and RUNTIM (which are of type
CHARACTER) and those with a name beginning with I,J,K,L,M,N (which are integer).
3. The different items appearing in the EVENTDAT energy balance may sometimes give overlapping information
and are not all meaningful in every circumstance (for instance residual excitation energy is meaningful only
if gamma deexcitation has not been requested). Unlike the balance which is printed at the end of standard
output, these terms are not additive.
4. An example on how to read EVENTDAT unformatted output is given below.
PROGRAM RDEVDT
CHARACTER*80 RUNTIT, FILNAM
CHARACTER*32 RUNTIM
DIMENSION ISCORE(4), ENDIST(12), REGSCO(5000,4)
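.........................................
* (Here the unformatted EVENTDAT file is opened on unit 7, an output text
* file on unit 8, and the header record - run title and time, number of
* regions NREGS, number NSCO and type ISCORE of the scored quantities -
* is read from unit 7.)
.........................................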
WRITE(8,’(A80)’) RUNTIT
WRITE(8,’(A32)’) RUNTIM
WRITE(8,’(A,I6,5X,A,I4)’) ’Number of regions: ’, NREGS,
& ’ Number of scored quantities: ’, NSCO
WRITE(8,’(A,4I6)’) ’The scored quantities are: ’,
& (ISCORE(IS), IS = 1, NSCO)
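.........................................
* (Event-by-event loop: for each primary history the energy balance ENDIST
* and the region scores REGSCO are read from unit 7 and written to unit 8;
* on end-of-file the program jumps to label 300 below.)
.........................................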
300 CONTINUE
WRITE(8,*) "End of a run of ", NCASE, " particles"
CLOSE (UNIT = 7)
CLOSE (UNIT = 8)
END
Example (number-based):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
SCORE 208. 211. 201. 8. 0. 0.
EVENTDAT -23. 0. 0. 0. 0. 0. EVT.SCOR
* In this example, the user requests (with option SCORE) scoring of
* total and electromagnetic energy deposition, total stars and
* neutron-produced stars. The average scores for each region will be
* printed on standard output (as an effect of SCORE command), and
* corresponding scores, as well as the energy balance, will be written
* separately for each primary particle on an unformatted file EVT.SCOR
7.25 EXPTRANS
Requests an exponential transformation (“path stretching”)
WHAT(1) ≥ 0.0: WHAT(2). . . WHAT(5) are used to input a parameter defining the exponential transfor-
mation in various regions
< 0.0: WHAT(2). . . WHAT(5) are used to define the particles to which the exponential trans-
formation must be applied
Default = 0.0
WHAT(3) = lower bound (or corresponding name) of the region indices with exponential transformation
parameter η = WHAT(2)
(“From region WHAT(3). . . ”)
Default = 2.0
WHAT(4) = upper bound (or corresponding name) of the region indices with exponential transformation
parameter equal to WHAT(2)
(“. . . to region WHAT(4). . . ”)
Default = WHAT(3)
WHAT(2) = lower bound (or corresponding name) of the particle indices to which exponential transfor-
mation is to be applied
(“From particle WHAT(2). . . ”)
Default = 1.0
WHAT(3) = upper bound (or corresponding name) of the particle indices to which exponential transfor-
mation is to be applied
(“. . . to particle WHAT(3). . . ”)
Default = WHAT(2) if WHAT(2) > 0.0, otherwise = 40.0 (low-energy neutrons)
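Example (number-based; an illustrative sketch, the value of the transformation parameter being arbitrary):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
EXPTRANS 0.0 0.5 5.0 9.0
* An exponential transformation with parameter 0.5 (WHAT(2)) is requested
* in regions 5 to 9. The particles to which the transformation is applied
* can be selected with a further EXPTRANS card having WHAT(1) < 0.0.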
7.26 FIXED
Re-establishes fixed-format input after a FREE command
Default (option FIXED not given): the following input is in free format if a FREE (p. 134) command has
been previously issued, or is all in free format if it has been chosen via a GLOBAL command
(p. 141).
In any other case, input is in fixed format.
7.27 FLUKAFIX
Sets the size of the step of muons and charged hadrons to a fixed fraction
of the kinetic energy in different materials
WHAT(1) = fraction of the kinetic energy to be lost in a step (must not be > 0.2)
Default : if option DEFAULTS is used with SDUM = ICARUS, the default is 0.02.
With SDUM = HADROTHErapy or PRECISIOn, the default is 0.05.
If SDUM = CALORIMEtry, the default is 0.08.
With any other SDUM value, or if DEFAULTS is missing, the default is 0.1.
WHAT(4) = lower index bound (or corresponding name) of materials where the specified energy loss per
step is to be applied.
(“From material WHAT(4). . . ”)
Default = 3.0
WHAT(5) = upper index bound (or corresponding name) of materials where the specified energy loss per
step is to be applied.
(“. . . to material WHAT(5). . . ”)
Default = WHAT(4)
Default (option FLUKAFIX not given): the defaults listed above apply. See also Table 7.1 at p. 97.
Notes
1. Usually there is no need for changing the default value of 10 % (0.1) for WHAT(1).
2. The input value is actually applied as such only at intermediate energies (between about a few tens of MeV and
1 GeV): at low energies it is slowly increased to twice the requested value, while at high energies it decreases
to a limit of about 1/100 of the input value.
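Example (number-based; an illustrative sketch with arbitrary values):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
FLUKAFIX 0.05 0.0 0.0 10.0 18.0
* Muons and charged hadrons will lose at most 5 % of their kinetic energy
* per step in all materials from 10 to 18.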
7.28 FREE
Activates free-format input
Default (option FREE not given): input must follow the standard Fluka format (A8,2X,6E10.0,A8)
(see Chap. 6), unless free format has been chosen via a GLOBAL command (p. 141)
Notes
1. FREE can be given at any point in input, excluding the geometry. All successive input (except the geometry)
will be read in free format, VOXELS and LATTICE cards included, until a FIXED (p. 132) command possibly
re-establishes fixed-format input.
2. In free-format input, keywords, WHATs and SDUMs do not need to be aligned in columns but can be written
anywhere on the input line, alternating with “separators” in a manner similar to that of list-oriented format
in Fortran (but character strings — keywords and SDUMs — must not be put between quotes!). A separator
is any one of the following:
– One of the five characters , (comma), ; (semicolon), / (slash), \ (backslash), : (colon), preceded
or not by one or more blanks, and followed or not by one or more blanks
– One or more successive blanks without any non-blank separator
3. Different separators may be used on the same line.
4. If a non-blank separator is immediately followed by another one (with or without blanks in between), a value
of 0.0 is assumed to be given between them.
5. Zero values must be given explicitly or as empty places between two non-blank separators as explained above.
6. Geometry input (i.e., input between GEOBEGIN and GEOEND cards not included, see Chap. 8) must still
respect the column format described in Chapter 8, except if free-format geometry input has been requested by
GLOBAL.
7. PLOTGEOM input, whether in a separate file PLG.GFXINDAT or directly after a PLOTGEOM card in standard
input, must still be formatted as shown in the description of that option (see p. 209).
8. If FREE has been issued, from then on all constants must be written without any blank imbedded (e.g., 5.3 E-5
is not allowed, but must be written 5.3E-5 or 5.30E-5)
9. Free format, if requested, applies to option cards of the form
KEYWORD WHAT(1) WHAT(2) . . . . . . WHAT(6) SDUM
but not to any data card specific to certain options (for instance the card following TITLE)
10. Free format can be requested also by option GLOBAL (p. 141), but extended to the whole input and not only
from the point where the command is issued. GLOBAL can also be used to request free format geometry input.
Example 1:
The following fixed-format input line
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
BEAM 20. 0.0 -1.0 E-2 -0.02 1.0 PION+
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
is equivalent to
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
FREE
BEAM 20. 0.0 0.0 -1.0E-2 -0.02 1.0 PION+
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
or to
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
FREE
BEAM, 20.,0.0 , 0.0, -1.0E-2; -0.02 1.0 /PION+ ! 20 GeV/c momentum?
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
etc. . . Note the possibility to insert comments at the end of the line!
Example 2:
At any moment the FIXED card re-establishes the fixed-format input mode.
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
FREE
BEAM, 20.,0.0 , 0.0, -1.0E-2; -0.02 1.0 /PION+ ! 20 GeV/c momentum?
FIXED
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
BEAMPOS 0. 0. 200. -0.012 0.0652336 NEGATIVE
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
7.29 GCR-SPE
Initialises Galactic Cosmic Ray or Solar Particle Event calculations
SDUM = name identifying both the spectra files (extension .spc) and the data file (extension .sur)
produced by the auxiliary program atmloc_2011.f (see Note 2).
Default (option GCR-SPE not given): no GCR/SPE calculation
Notes
1. Cosmic ray calculations, initialised by GCR-SPE, are defined by means of command SPECSOUR and a number
of auxiliary programs. Details are presented in Chap. 16
2. The cards for the geometry description of the atmospheric shells must be prepared using the auxiliary programs
and datacards in the directory $FLUPRO/gcrtools. Program atmloc_2011.f writes a file atmloc.geo, containing
the geometry input to be inserted into the Fluka input file (or to be read by setting WHAT(3) in the GEOBEGIN
card), a file atmlocmat.cards containing the extra material assignments, and a file atmloc.sur containing
auxiliary data and the scoring areas. The user shall rename the file atmloc.sur to <xxxxxxx>.sur, where
<xxxxxxx> is an identifier of exactly 7 characters which must appear also in the input spectra file names: the
spectra must have the names <zz><xxxxxxx>.spc, where <zz> is the atomic number of the primary. The
example spectra distributed with FLUKA come with two identifiers : <zz>phi0465 for solar minimum and
<zz>phi1440 for solar maximum, and with zz=01-28.
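Example (an illustrative sketch using the solar-minimum identifier quoted in Note 2):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
GCR-SPE                                                          phi0465
* The data file phi0465.sur and the spectra files <zz>phi0465.spc prepared
* as described in Note 2 are then used at initialisation.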
7.30 GEOBEGIN
Starts the geometry description
WHAT(3) > 0.0: logical unit for geometry input. The name of the corresponding file must be input on
the next card if WHAT(3) is different from 5.0. Note that values of WHAT(3) ≠ 5.0
and < 21.0 must be avoided because of possible conflicts with Fluka pre-defined
units.
Default = 5.0 (i.e., geometry input follows)
WHAT(4) > 0.0: logical unit for geometry output. If different from 11, the name of the corresponding
file must be input on the next card if WHAT(3) = 0.0 or 5.0, otherwise on the card
following the next one. Values of WHAT(4) ≠ 11.0 and < 21.0 must be avoided
because of possible conflicts with Fluka pre-defined units.
Default = 11.0 (i.e., geometry output is printed on the standard output)
WHAT(5) = ip0 + ip1 × 1000, where ip0 indicates the level of parentheses optimisation and ip1 , if > 0,
forces the geometry optimisation even when there are no parentheses
Default (option GEOBEGIN not given): not allowed! GEOBEGIN and GEOEND must always be present.
Notes
Example 1:
* CG Input follows, output is printed as part of Main Output
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
GEOBEGIN 0. 0. 0. 0. 0. 0.COMBINAT
Example 2:
* CG Input read from file BigHall.geo, output printed as part of Main Output
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
GEOBEGIN 0. 0. 25. 0. 0. 0.COMBINAT
BigHall.geo
Example 3:
* CG Input follows, output is printed on file geo2.out
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
GEOBEGIN 0. 0. 0. 26. 0. 0.COMBINAT
geo2.out
Example 4:
* CG Input read from file BigHall.geo, output printed on file geo2.out
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
GEOBEGIN 0. 0. 25. 26. 0. 0.COMBINAT
BigHall.geo
geo2.out
7.31 GEOEND
Ends the geometry definitions.
This option can also be used to debug the geometry: in this case it may
extend over two cards
Normally, only one GEOEND card is issued, at the end of the geometry description, with all the WHATs and
the SDUM equal to zero or blank. However, GEOEND can also be used to activate the Fluka geometry
debugger: in this case one or two GEOEND cards are needed, providing the information described below.
WHAT(1) = Number of mesh intervals in the X-direction between Xmin and Xmax
Default = 20.0
WHAT(2) = Number of mesh intervals in the Y-direction between Ymin and Ymax
Default = 20.0
WHAT(3) = Number of mesh intervals in the Z-direction between Zmin and Zmax
Default = 20.0
WHAT(4) – WHAT(6): not used
SDUM = “ & ” in any position in column 71 to 78 (or in the last field if free format is used)
Default (option GEOEND not given): not allowed! GEOBEGIN and GEOEND must always be present.
Notes
1. The geometry debugger can detect both undefined points (points which are not included in any defined region)
and multiple defined points (points which are included in more than one region, i.e., there are overlapping
regions) in the selected X,Y,Z mesh. The first kind of error is likely to cause a run-time error every time a
particle is passing through the undefined zone, the second one is more subtle and it is not usually detected at
run-time. It is impossible to predict to which actual region such multiple defined points will be assigned.
2. The geometry debugger cannot assure that a bug-free geometry input is what the user would like to describe,
however it seldom occurs that users are able to define a bug-free input different from what they wanted to
describe.
3. It must be stressed that only the points of the defined X,Y,Z mesh are checked, therefore mesh dimensions and
pitches should be chosen according to the present geometry, taking into account region thicknesses etc.
4. Another useful tool is available for this purpose: the Plotgeom program, which is activated by means of the
PLOTGEOM command (7.56).
5. It must be stressed too that the geometry debugger can be very time consuming, so don’t ask for 100 µm
pitches in X,Y,Z over a 10 m distance, or the code will run forever! Make use as much as possible of geometry
symmetries (for example for a cylindrical geometry there is no need for a 3-D scan) and possibly “zoom” with
fine meshes only around points where you suspect possible errors. Note that as many areas as wished can be
scanned with different meshes of the same geometry, simply changing the mesh parameters each time.
6. Warning: the program does not stop if an error is detected but a message is issued on the output units, and
checking goes on. If the code is “stepping” into an area erroneously defined, it is likely that plenty of such error
messages will be printed. If your operating system allows inspection of output files before they are closed, check
the size of your output from time to time. If it is growing too much, stop the code and correct the geometry
for the printed errors before restarting the debugger.
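Example (number-based; an illustrative sketch which assumes the usual debugging form of GEOEND, in which the first card carries SDUM = DEBUG together with the maximum and minimum coordinates of the scanned volume, and the continuation card the numbers of mesh intervals described above):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
GEOEND 150.0 75.0 220.0 -150.0 -75.0 -220.0 DEBUG
GEOEND 100.0 50.0 110.0 &
* The volume between x = -150 and 150 cm, y = -75 and 75 cm, z = -220 and
* 220 cm is scanned with a mesh of 100 x 50 x 110 intervals.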
7.32 GLOBAL
Makes a global declaration about the present run, setting some important
parameters that must be defined before array memory allocation
WHAT(2) = declaration of “how analogue” this run must be: fully analogue, as biased as possible, or
automatically chosen by the program?
< 0.0: as analogue as possible (provided the input is consistent with this choice)
> 1.0: as biased as possible (allowed also for a run in which no explicit biasing option is
requested: in this case it simply means “do not try to be analogue”)
0.0 ≤ WHAT(2) ≤ 1.0: as analogue as decided by the program according to the selected
biasing options
Default = 0.0 (input decides the amount of biasing)
WHAT(3) = declaration about the use of the DNEAR variable (see Note 4) when computing physical steps:
< 0.0: always use DNEAR when computing the tentative length of particle steps (it can cause
non reproducibility of the random number sequence when starting from different
histories, but it does not affect physics)
> 0.0: do not use DNEAR when computing the tentative length of particle steps (full re-
producibility of the random number sequence starting from different histories, some
penalty in CPU time)
= 0.0: use DNEAR when computing the tentative length of particle steps only when the random
number sequence reproducibility is assured (full reproducibility of random number
sequence within the same geometry package, possible non reproducibilities among
different geometry packages describing the same geometry)
Default = 0.0 (random number sequence reproducible within the same geometry package)
WHAT(5) : flag to request free format in the geometry input for bodies and regions. This format is
described in 8.2.3.2 and 8.2.7.3, and requires the use of names (alphanumerical 8-character
strings beginning with a letter) as identifiers. Parentheses are allowed.
< 0.0: resets the default
= 0.0: ignored
> 0.0: geometry input for bodies and regions will be in free format and name-based
Notes
1. In most cases the user should not worry about the number of geometry regions. Despite the fact that Fluka
input does not follow any specific order, the program is able to manage initialisation of all geometry-dependent
arrays by allocating temporary areas of memory even before the actual dimensions of the problem are known.
The unused parts of such areas are recovered at a later time when the whole input has been read. However, if
the number of regions is very large (> 1000), the program needs to be informed in order to increase the size
of such temporary areas. This information must be given at the very beginning: therefore GLOBAL (together
with DEFAULTS, MAT–PROP and PLOTGEOM) is a rare exception to the rule that the order of Fluka input
cards is free.
2. The “hard” limit of 20000 regions represents the maximum that can be obtained without recompiling the
program. It can be overridden, but only by increasing the value of variable MXXRGN in the INCLUDE file DIMPAR
and recompiling the whole code. In this case, however, it is likely that the size of variable NBLNMX in INCLUDE
file BLNKCM will have to be increased too.
3. In a “fully analogue” run, each interaction is simulated by sampling each exclusive reaction channel with its
actual physical probability. In general, this is not always the case, especially concerning low-energy neutron
interactions. Only issuing a GLOBAL declaration with WHAT(2) < 0.0 can it be ensured that analogue
sampling is systematically carried out whenever it is possible. The lack of biasing options in input is not
sufficient for this purpose. This facility should be used in problems where fluctuations or correlations cannot
be ignored, but it is likely to lead to longer computing times.
4. DNEAR designates the distance between the current particle position and the nearest boundary (or a lower bound
to that distance), and it is used by Fluka to optimise the step length of charged particles. The concept and the
name have been borrowed from the Egs4 code [140], but the Fluka implementation is very different because
it is fully automatic rather than left to the user, and it is tailored for Combinatorial Geometry, where a region
can be described by partially overlapping sub-regions (described in input by means of the OR operator). The
sequential order in which overlapping sub-regions are considered when evaluating DNEAR is irrelevant from the
point of view of particle tracking, but can affect the random number sequence. This does not have any effect on
the average results of the calculation, but the individual histories can differ due to the different random number
sequence used. Option GLOBAL can be used in those cases where the user wants to reproduce exactly each
particle history, or on the contrary to forgo it in order to get a better step optimisation.
5. Free format can be requested also by option FREE, but only for the part of input that follows the command.
FREE cannot be used to request free format geometry input. See the Notes to FREE for the rules governing
separators (Note 2 and following, on p. 134).
6. Free-format, name-based geometry input can be requested also by setting SDUM = COMBNAME in command
GEOBEGIN (p. 137).
Example 1:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
TITLE
A fully analogue run (no other commands precede this TITLE card)
GLOBAL 2000. -1. 1. 0. 0. 0.
* This run needs more than the default maximum number of regions. It is
* requested to be as analogue as possible and to avoid using DNEAR if
* it risks to affect the random number sequence.
Example 2:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
TITLE
Full free-format input (no other commands precede this TITLE card)
GLOBAL 0.0 0.0 0.0 2.0 1.0 0.
* The following input will be all in free format (both the FLUKA commands
* and the geometry description)
7.33 HI–PROPErt
Specifies the properties of a heavy ion primary, or a radioactive isotope
primary
Default (option HI–PROPErt not given): a HEAVYION projectile is assumed to be 12 C in the ground
state.
Note
1. Option HI–PROPErt is used to specify the properties of a generic heavy ion primary declared by a BEAM
command (p. 71) with SDUM = HEAVYION, or by a user-written subroutine SOURCE (p. 363) with id-number
IJ = -2.
Example:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...
* Primary particles are 10 GeV Au-197 ions in the ground state:
BEAM -10.0 0.0 0.0 0.0 0.0 0. HEAVYION
HI-PROPE 79.0 197.0 0.0 0.0 0.0 0.
7.34 IONFLUCT
Calculates ionisation energy losses of charged hadrons, muons, and elec-
trons/positrons with ionisation fluctuations.
WHAT(1) ≥ 1.0: switches on restricted energy loss fluctuations for hadrons and muons
≤ -1.0: switches off restricted energy loss fluctuations for hadrons and muons
= 0.0: ignored
Default : restricted energy loss fluctuations for hadrons and muons are activated if option
DEFAULTS is missing or if it is used with SDUM = CALORIMEtry, EET/TRANSmut,
HADROTHErapy, ICARUS, NEW–DEFAults or PRECISIOn.
With any other SDUM value, they are not activated.
WHAT(2) ≥ 1.0: switches on restricted energy loss fluctuations for electrons and positrons
≤ -1.0: switches off restricted energy loss fluctuations for electrons and positrons
= 0.0: ignored
Default : restricted energy loss fluctuations for electrons and positrons are activated if op-
tion DEFAULTS is missing or if it is used with SDUM = CALORIMEtry, EM–CASCAde,
HADROTHErapy, ICARUS, NEW–DEFAults or PRECISIOn.
With any other SDUM value, they are not activated.
WHAT(3) : If WHAT(1) ≥ 1.0 (resp. WHAT(2) ≥ 1.0), WHAT(3) represents the accuracy parameter for
the ionisation fluctuation algorithm [73] (see 1.2.1.4) for hadrons and muons (resp. electrons
and positrons). The accuracy parameter can take integer values from 1 to 4 (corresponding
to increasing levels of accuracy)
< 0.0: resets to default
Default = 1.0 (minimal accuracy)
WHAT(4) = lower bound (or corresponding name) of the indices of the materials in which the restricted
energy loss fluctuations are activated
(“From material WHAT(4). . . ”)
Default = 3.0
WHAT(5) = upper bound (or corresponding name) of the indices of the materials in which the restricted
energy loss fluctuations are activated
(“. . . to material WHAT(5). . . ”)
Default = WHAT(4)
SDUM : blank
Delta rays below threshold for explicit generation are generated anyway: for close collisions down to
the threshold, and for distant collisions down to an internally computed value, such as to match the input
1st ionisation potential and the average number of primary ionisations per unit length.
WHAT(2) = number of primary ionisations per cm for a minimum ionising particle (assumed to be a µ+
at βγ = 3). For gases it must be the value at NTP.
If set = 0 (a valid value), only primary electrons related to close collisions will be produced
and WHAT(1) and WHAT(3) will be meaningless.
Default : No default
WHAT(4) = lower bound (or corresponding name) of the indices of the materials in which the choices
represented by WHAT(1), (2) and (3) apply
(“From material WHAT(4). . . ”)
Default = 3.0
WHAT(5) = upper bound (or corresponding name) of the indices of the materials in which the choices
represented by WHAT(1),(2) and (3) apply
(“. . . to material WHAT(5). . . ”)
Default = WHAT(4)
SDUM = PRIM–ION
Default (option IONFLUCT not given): ionisation fluctuations are simulated or not depending on op-
tion DEFAULTS as explained above. Explicit primary ionisation events are never simulated by
default.
See also Table 7.1 at p. 97.
Notes
1. The energy loss fluctuation algorithm is fully compatible with the DELTARAY option (p. 98). (See Example
below).
2. Primary ionisation electron energies are stored in COMMON ALLDLT at each step in the selected materials.
Use with care, and preferably for gases only. The number of primary ionisation electrons can quickly escalate,
particularly when multiply charged ions are involved. No COMMON saturation crash should occur, since the code
piles up all the remaining primary electrons into the last COMMON location if no further space is available;
however, CPU penalties can be severe if the option is used carelessly.
Example (number-based):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
IONFLUCT 0.0 1.0 3.0 7.0 16.0 3.0
IONFLUCT 1.0 0.0 2.0 8.0 10.0 2.0
DELTARAY 1.E-3 0.0 0.0 10.0 11.0
* The special FLUKA algorithm for ionisation fluctuations is activated
* with accuracy level 3 for electrons and positrons in materials 7, 10, 13 and
* 16 (Nitrogen, Aluminum, Silver and Mercury). The same algorithm is activated,
7.35 IONTRANS
Determines the transport of ions (A > 1), allowing to limit it to subsets of
the light ions (d, t, 3 He, 4 He) and to choose between approximate and full transport
(including nuclear interactions)
WHAT(1) :
= 0.0: ignored
= -1.0: approximate transport (without interactions) of all light and heavy ions
-6.0 ≤ WHAT(1) ≤ -3.0: full transport of light ions with FLUKA id ≥ WHAT(1)
(-3 = d, -4 = t, -5 = 3 He, -6 = 4 He), and approximate transport of all other ions
Default = 0.0 (no ion transport, unless an ion beam is requested by the BEAM card (p. 71),
see Note 2)
WHAT(2) – WHAT(6): not used
Notes
1. When requested, interactions at energies larger than 125 MeV/n are performed provided that the external event
generators Dpmjet and Rqmd are linked (through the script $FLUPRO/flutil/ldpmqmd). For energies lower
than 125 MeV/n, the Bme event generator is already included in the main $FLUPRO/libflukahp.a library and
linked in the standard $FLUPRO/flukahp executable.
2. In the presence of a heavy ion beam, full transport of all ions is enabled by default (no need for IONTRANS).
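A possible request, consistent with the WHAT(1) values listed above (all other fields left at zero):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
* Full transport (with nuclear interactions) of d, t, 3-He and 4-He,
* approximate transport (no interactions) of all heavier ions:
IONTRANS -6.0 0.0 0.0 0.0 0.0 0.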
7.36 IRRPROFIle
WHAT(2) : beam intensity of the newly defined (see WHAT(1)) irradiation interval
≥ 0.0: beam intensity in particles/s (0.0 is accepted)
< 0.0: considered as 0.0
Default = 0.0 particles/s
Note
1. Several cards can be combined up to the desired number of irradiation intervals. Decay times as requested by
DCYTIMES commands (p. 91) will be calculated from the end of the last one. Scoring during irradiation can
be obtained giving negative decay times in DCYTIMES.
Example:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
IRRPROFILE 1800. 1.5E12 250. 3.E10 4500. 4.2E12
* The profile defined consists of 1800 s of irradiation at an intensity of
* 1.5E12 particles/s, followed by 250 s at low intensity (3.E10 particles/s),
* and then a third 4500 s interval at 4.2E12 particles/s
7.37 LAM–BIAS
Used to bias the decay length of unstable particles, the inelastic nuclear in-
teraction length of hadrons, photons, electrons and muons and the direction
of decay secondaries
SDUM = GDECAY selects decay length biasing and inelastic nuclear interaction biasing.
If SDUM = blank, decay life biasing and inelastic nuclear interaction biasing are selected.
Other LAM–BIAS cards with SDUM = DECPRI, DECALL, INEPRI, INEALL allow to restrict biasing to primary
particles or to extend it also to further generations.
The decay secondary product direction is biased in a direction indicated by the user by means of a unit
vector of components U, V, W (see Notes 4 and 5):
WHAT(1) = U (x-direction cosine) of the preferred direction to be used in decay direction biasing
Default = 0.0
WHAT(2) = V (y-direction cosine) of the preferred direction to be used in decay direction biasing
Default = 0.0
WHAT(3) = W (z-direction cosine) of the preferred direction to be used in decay direction biasing
Default = 1.0
WHAT(4) > 0.0: λ for decay direction biasing. The degree of biasing decreases with increasing lambda
(see Note 5)
= 0.0: a user provided routine (UDCDRL, p. 371) will be called at each decay event, to provide
both direction and lambda for decay direction biasing
< 0.0: resets to default (λ = 0.25)
Default = 0.25
WHAT(1) > 0.0: decay direction biasing is activated (see Notes 4 and 5):
= 0.0: ignored
< 0.0: decay direction biasing is switched off
WHAT(4) = lower bound of the particle id-numbers (or corresponding name) for which decay direction
biasing is to be applied
(“From particle WHAT(4). . . ”)
Default = 1.0
WHAT(5) = upper bound of the particle id-numbers (or corresponding name) for which decay direction
biasing is to be applied
(“. . . to particle WHAT(5). . . ”)
Default = WHAT(4) if WHAT(4) > 0.0, 64.0 otherwise
WHAT(1) : biasing parameter for decay length or life, applying only to unstable particles (with particle
numbers ≥ 8). Its meaning differs depending on the value of SDUM, as described below.
WHAT(3) > 2.0: number or name of the material to which the inelastic biasing factor has to be applied.
< 0.0: resets to the default a previously assigned value
= 0.0: ignored if a value has been previously assigned to a specific material; otherwise: all
materials (default)
0.0 < WHAT(3) ≤ 2.0: all materials.
WHAT(4) = lower bound of the particle id-numbers (or corresponding name) for which decay or inelastic
interaction biasing is to be applied
(“From particle WHAT(4). . . ”)
Default = 1.0
WHAT(5) = upper bound of the particle id-numbers (or corresponding name) for which decay or inelastic
interaction biasing is to be applied
(“. . . to particle WHAT(5). . . ”)
Default = WHAT(4) if WHAT(4) > 0.0, 64.0 otherwise
WHAT(6) = step length in assigning numbers
(“. . . in steps of WHAT(6) ”)
Default = 1.0
SDUM = DECPRI : decay biasing, as requested by another LAM–BIAS card with SDUM = GDECAY
or blank, must be applied only to primary particles.
= DECALL : decay biasing, as requested by another LAM–BIAS card with SDUM = GDECAY
or blank, must be applied to all generations (default).
= INEPRI : inelastic hadronic interaction biasing, as requested by another LAM–BIAS card
with SDUM = blank, must be applied only to primary particles.
= INEALL : inelastic hadronic interaction biasing, as requested by another LAM–BIAS card
with SDUM = blank, must be applied to all generations (default)
Default (option LAM–BIAS not given): no decay length, decay direction, or inelastic interaction biasing
Notes
5. The biasing function for the decay direction is of the form exp[−(1 − cos θ)/λ], where θ is the polar angle between
the sampled direction and the preferential direction (transformed to the centre of mass reference system). The
degree of biasing is largest for small positive values of λ (producing direction biasing strongly peaked along
the direction of interest) and decreases with increasing λ. Values of λ ≥ 1.0 result essentially in no biasing.
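As a rough numerical illustration: with the default λ = 0.25, the relative sampling weight exp[−(1 − cos θ)/λ]
equals 1 at θ = 0, exp(−4) ≈ 0.018 at θ = 90° and exp(−8) ≈ 3.4 × 10−4 at θ = 180°, while with λ = 1.0 the
corresponding values are 1, 0.37 and 0.14 (i.e., only a mild preference for the chosen direction).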
6. Biasing of hadronic inelastic interaction length can be done either in one single material (indicated by WHAT(3))
or in all materials (default). No other possibility is foreseen for the moment.
7. When choosing the Russian Roulette alternative, it is suggested to set also a weight window (options
WW–FACTOR and WW–THRESh, p. 270, 275) in order to avoid too large weight fluctuations.
8. Excessively small reduction factors can result in an abnormal increase of the number of secondaries to be loaded
on the stack, especially at high primary energies. In such cases, Fluka issues a message that the secondary
could not be loaded because of a lack of space. The weight adjustment is modified accordingly (therefore the
results are not affected) but if the number of messages exceeds a certain limit, the run is terminated.
9. Biasing of the hadronic inelastic interaction length can be applied also to photons, electrons and positrons
(provided option PHOTONUC is also requested with the relevant parameters, p. 198) and muons (provided
option MUPHOTON is also requested, p. 178); actually, it is often a good idea to do this in order to increase
the probability of photonuclear interaction.
10. For photons, a typical reduction factor of the hadronic inelastic interaction length is of the order of 0.01 to 0.05
for a shower initiated by 1 GeV photons or electrons, and of 0.1 to 0.5 for one at 10 TeV. For electrons and
positrons, a typical reduction factor is about 5 × 10−4.
7.38 LOW–BIAS
Requests non-analogue absorption and/or an energy cutoff during low-
energy neutron transport on a region by region basis
WHAT(1) > 0.0: group cutoff (neutrons in energy groups with number ≥ WHAT(1) are not trans-
ported).
This value can be overridden in user routine UBSSET (p. 370) by assigning a value to
variable IGCUTO
Default = 0.0 (no cutoff)
WHAT(2) > 0.0: group limit for non-analogue absorption (neutrons in energy groups ≥ WHAT(2) un-
dergo non-analogue absorption).
Non-analogue absorption is applied to the NMGP-WHAT(2)+1 groups with energies equal
to or lower than those of group WHAT(2), provided WHAT(2) ≤ NMGP; otherwise it is not
applied to any group (NMGP is the number of neutron groups in the cross section li-
brary used: it is = 260 in the standard Fluka neutron library).
This value can be overridden in user routine UBSSET (p. 370) by assigning a value to
variable IGNONA.
Default : if option DEFAULTS is used with SDUM = CALORIMEtry, ICARUS, NEUTRONS or
PRECISIOn, the default is = NMGP+1 (usually 261), meaning that non-analogue ab-
sorption is not applied at all.
If DEFAULTS is missing, or is present with any other SDUM value, the default is the
number of the first thermal group (usually 230).
WHAT(3) > 0.0: non-analogue survival probability. Must be ≤ 1.0
This value can be overridden in user routine UBSSET (p. 370) by assigning a value to
variable PNONAN.
Default : if option DEFAULTS is used with SDUM = EET/TRANsmut, HADROTHErapy,
NEW–DEFAults or SHIELDINg, the default is 0.95.
If DEFAULTS is missing, or is present with any other SDUM value, the default is 0.85.
WHAT(4) = lower bound of the region indices (or corresponding name) in which the indicated neutron
cutoff and/or survival parameters apply
(“From region WHAT(4). . . ”)
Default = 2.0
WHAT(5) = upper bound of the region indices (or corresponding name) in which the indicated neutron
cutoff and/or survival parameters apply
(“. . . to region WHAT(5). . . ”)
Default = WHAT(4)
Default (option LOW–BIAS not given): the physical survival probability is used for all groups except
thermal ones, which are assigned a probability of 0.85. However, if option DEFAULTS has
been chosen with SDUM = EET/TRANsmut, HADROTHErapy, NEW–DEFAults or SHIELDINg, this
default value is changed to 0.95.
If SDUM = CALORIMEtry, ICARUS, NEUTRONS or PRECISIOn, the default is equal to the physical
survival probability for all groups, including thermal.
See also Table 7.1 at p. 97.
Notes
1. The groups are numbered in decreasing energy order (see Chap. 10 for a detailed description). Setting a group
cutoff larger than the last group number (e.g., 261 when using a 260-group cross section set) results in all
neutrons being transported, i.e., no cutoff is applied.
2. Similarly, if WHAT(2) is set larger than the last group number, non-analogue neutron absorption is not applied
to any group (this is recommended for calorimetry studies and all cases where fluctuations and correlations are
important).
3. The survival probability is defined as 1 − Σabs/ΣT, where Σabs is the inverse of the absorption mean free path and
ΣT the inverse of the mean free path for absorption plus scattering (total macroscopic cross section). The
LOW–BIAS option allows the user to control neutron transport by imposing an artificial survival probability
and corrects the particle weight taking into account the ratio between physical and biased survival probability.
4. In some programs (e.g., Morse) [60] the survival probability is always forced to be = 1.0. In Fluka, if the
LOW–BIAS option is not chosen, the physical survival probability is used for all non-thermal groups, and the
default 0.85 is used for the thermal groups. (The reason for this exception is to avoid endless thermal neutron
scattering in materials having a low thermal neutron absorption cross section). To get the physical survival
probability applied to all groups, as needed for fully analogue calculations, the user must use LOW–BIAS with
WHAT(2) larger than the last group number.
5. In selecting a forced survival probability for the thermal neutron groups, the user should have an idea of the
order of magnitude of the actual physical probability. The latter can take very different values: for instance it
can range between a few per cent for thermal neutrons in 10 B to about 80-90 % in Lead and 99 % in Carbon.
Often, small values of survival probability will be chosen for the thermal groups in order to limit the length of
histories, but not if thermal neutron effects are of particular interest.
6. Concerning the other energy groups, if there is interest in low-energy neutron effects, the survival probability for
energy groups above thermals in non-hydrogenated materials should be set at least = 0.9, otherwise practically
no neutron would survive enough collisions to be slowed down. In hydrogenated materials, a slightly lower
value could be acceptable. Setting less than 80 % is likely to lead to erroneous results in most cases.
7. Use of a survival probability equal or smaller than the physical one is likely to introduce important weight
fluctuations among different individual particles depending on the number of collisions undergone. To limit the
size of such fluctuations, which could slow down statistical convergence, it is recommended to define a weight
window by means of options WW–THRESh (p. 275), WW–FACTOr (p. 270) and WW-PROFIle (p. 273).
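As an illustration of Notes 2 and 4, the following card (region numbers are purely illustrative) requests the
physical survival probability in all groups, including thermal, when using the standard 260-group library:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
* Fully analogue low-energy neutron transport in regions 3 to 8:
LOW-BIAS 0.0 261.0 0.0 3.0 8.0 0.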
7.39 LOW–DOWN
Biases the downscattering probability during low-energy neutron transport
on a region by region basis
WHAT(2) = biasing factor for down-scattering from group IG-1 into group IG.
This value can be overridden in user routine UBSSET (p. 370) by assigning a value to variable
FDOWSC.
Default = 1.5
WHAT(4) = lower bound of the region indices (or corresponding name) in which downscattering biasing
is to be applied
(“From region WHAT(4). . . ”)
Default = 2.0
WHAT(5) = upper bound of the region indices (or corresponding name) in which downscattering biasing
is to be applied
(“. . . to region WHAT(5). . . ”)
Default = WHAT(4)
WHAT(6) = step length in assigning indices.
(“. . . in steps of WHAT(6) ”)
Default = 1.0
SDUM : not used
Default (option LOW–DOWN not given): no downscatter biasing
Notes
1. This option can be useful only in very particular problems, for instance to calculate the response of instruments
based on moderation (Bonner spheres, rem-counters). Very powerful but also very dangerous, it can lead to
errors of orders of magnitude if not used by experts.
2. The groups are numbered in decreasing energy order (see Chap. 10 for a detailed description).
3. When this option is used, the natural probabilities of scatter from group I to group J, P(I→J), are altered
by an importance factor V(J). Selection of the outgoing group J is made from a biased distribution function
P(I→J)·V(J) with an associated weight correction.
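A minimal sketch of a possible use (region numbers are illustrative; the fields not described above are left at zero):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
* Downscattering biased by a factor 3 in regions 5 to 9:
LOW-DOWN 0.0 3.0 0.0 5.0 9.0 1.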
7.40 LOW–MAT
Sets the correspondence between Fluka materials and low-energy neutron
cross sections
WHAT(1) = number or name of the Fluka material, either taken from the list of standard Fluka
materials (see Tables 10.4.1, p. 323 and 5.3, p. 47), or defined via a MATERIAL option (p. 163).
Default : No default!
WHAT(5) = (Not implemented!) compound material if > 0.0. This applies only to pre-mixed low-
energy neutron compound materials, which could possibly be available in the future; at the
moment however, none is yet available. (It would be allowed anyway only if the corresponding
Fluka material is also a compound).
Default : compound if the Fluka material is a compound, otherwise not.
WHAT(6) = (Not implemented!) atomic or molecular density in atoms/(10−24 cm3 ), i.e., the number of
atoms contained in a 1-cm long cylinder with base area = 1 barn.
To be used only if referring to a pre-mixed compound data set (see COMPOUND, Note 7,
p. 85 and explanation of WHAT(5) here above).
Note that no such data set has been made available yet.
SDUM = name of the low-energy neutron material.
Default : same name as the Fluka material.
Default (option LOW–MAT not given): correspondence between Fluka and low-energy neutron materi-
als is by name; in case of ambiguity the first material in the relevant list (see Table 10.3, p. 325)
is chosen.
Notes
1. Each material in the Fluka low-energy neutron libraries (see Chap. 10) is identified by an alphanumeric name
(a string of ≤ 8 characters, all in upper case), and by three integer numbers. Correspondence with Fluka
materials (standard or user-defined) is based on any combination of name and zero or more identifiers. In case
of ambiguity, the first material in the list fulfilling the combination is selected.
2. Option LOW–MAT should be avoided if it is not really necessary (experience has shown that it is often misin-
terpreted by beginner users). The option is not required if the following 3 conditions are all true:
i ) the low-energy neutron material desired is unique or is listed before any other material with the same
name in Table 10.3
and
ii ) that name is the same as one in the Fluka list (Table 10.3) or as given by a MATERIAL option
and
iii ) there is only one Fluka material associated with that low-energy neutron material
3. On the other hand, the option is required in any one of the following cases:
i ) there is more than one low-energy neutron material with that name (this can happen because of data sets
coming from different sources, or corresponding to different neutron temperatures, or concerning different
isotopes, or weighted on different spectra, etc), and the one desired is not coming first in the list. In this
case it is sufficient to provide just as many identifiers as required to remove ambiguity
or
ii ) The Fluka name is different from the name of the low-energy neutron material
or
iii ) There is more than one Fluka material associated with the given low-energy neutron material. This
can happen for instance when the same material is present with different densities in different regions.
In reality this is a special case of ii) above, since to distinguish between the different densities, different
names must be used and one at least will not be equal to the name of the low-energy neutron material.
4. (Not implemented!) If WHAT(5) is set > 0.0 because a pre-mixed compound low-energy neutron material
is used, average cross sections are used (as for instance in the Morse code). Otherwise, if each of the Fluka
elemental components has been associated with one of the elemental low-energy neutron components and the
composition of the compound has been defined by a COMPOUND option, low-energy neutron interactions will
take place randomly with each individual component, with the appropriate probability.
It is however possible to have in the same run detailed individual interactions at high energies and average
compound interactions for low-energy neutrons. But not the other way around!
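A minimal sketch of case ii) of Note 3 (the material name MYCARBON is hypothetical): a user material defined
with its own name is mapped by name to the carbon low-energy neutron data, leaving the numerical identifiers
at zero so that the first matching data set in Table 10.3 is taken:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
LOW-MAT MYCARBON 0.0 0.0 0.0 0.0 0.0 CARBON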
7.41 LOW–NEUT
WHAT(1) = number of neutron groups in the neutron cross section library used. The Fluka standard
neutron library has 260 groups (see Chap. 10).
Default = 260.0
WHAT(4) = printing flag (see p. 304 for a detailed description of the printed output):
from 0.0 to 3.0 increases the amount of output about cross sections, kerma factors, etc.:
1.0: Standard output includes integral cross sections, kerma factors and probabilities
2.0: In addition to the above, downscattering matrices and group neutron-to-gamma transfer
WHAT(5) = number of neutron groups to be considered as thermal ones. (The standard Fluka neutron
library has 31 thermal groups).
= 0.0: ignored
> 0.0: resets to the default = 31.0
Default = 31.0
WHAT(6) = i0 + 10 · i1 :
i0 = 1: available pointwise cross sections are used (see important details in Note 5 below),
explicit and correlated secondary generation for 10 B(n,α)7 Li is activated, as well as correlated
photon cascade for x Xe(n,γ)x+1 Xe and 113 Cd(n,γ)114 Cd
= 0: ignored
≤ -1: resets to the default (pointwise cross sections are not used)
i1 = 1: fission neutron multiplicity is forced to 1, with the proper weight
= 0: ignored
≤ -1: resets to the default (normal fission multiplicity)
Default = -11., unless option DEFAULTS has been chosen with SDUM = CALORIMEtry,
ICARUS, NEUTRONS or PRECISIOn, in which case the default is 1.0 (pointwise treat-
ment – see Note 5 – and generation of secondary charged particles and correlated
photon cascades are performed when available, and fission multiplicity is not forced)
SDUM : not used
Default (option LOW–NEUT not given): if option DEFAULTS has been chosen with
SDUM = CALORIMEtry, EET/TRANsmut, HADROTHErapy, ICARUS, NEUTRONS, NEW–DEFAults,
PRECISIOn or SHIELDINg, low-energy neutrons are transported and a suitable cross section
library must be available.
In all other cases, low-energy neutrons are not transported, and their energy is deposited as
explained in Note 2 below.
Notes
1. In Fluka, transport of neutrons with energies lower than a certain threshold is performed by a multigroup
algorithm. For the neutron cross section library currently used by Fluka, this threshold is 0.020 GeV. The
multigroup transport algorithm is described in Chap. 10.
2. If low-energy neutrons are not transported (because of the chosen DEFAULTS, or because so requested by the
user, see Note 3) the energy of neutrons below threshold (default or set by PART–THR, p. 196) is deposited on
the spot. This is true also for evaporation neutrons.
3. If there is no interest in transporting low-energy neutrons, but this feature is implicit in the DEFAULTS
option chosen, it is suggested to use PART–THRes (p. 196) with an energy cutoff WHAT(1) = 0.020. However,
even in this case the availability of the low-energy neutron cross sections for the materials defined in input is
checked. To avoid the run being stopped with an error message, the user should issue a LOW–MAT command
for each material for which cross sections are missing, pointing them to any available material.
4. Gamma data are used only for capture gamma generation and not for transport (transport is done via the
ElectroMagnetic Fluka module Emf using continuous cross sections). The actual precise energy of a photon
generated by (n,γ) or by inelastic reactions such as (n,n’) is sampled randomly within the gamma energy
group concerned, except for a few important reactions where a single monoenergetic photon is emitted, as
the 1 H(n,γ)2 H reaction where the actual photon energy of 2.226 MeV is used. It is possible to get (single or
correlated) physical gammas also for the capture in 6 Li, 10 B, 40 Ar, x Xe and 113 Cd, by setting WHAT(6) = 1.0
or 11.0 (see Note 5 for the additional requirement applying to 6 Li and 40 Ar).
5. Pointwise neutron transport, fully alternative to the groupwise one, is available only for 1 H (above 10 eV)
and 6 Li (all reactions), by setting WHAT(6) = 1.0 or 11.0. In the case of 6 Li, in order to get the pointwise
treatment it is mandatory to define the respective monoisotopic material through a MATERIAL card and name
it LITHIU-6 (or with a character string containing LI-6 or 6-LI).
Pointwise treatment has been developed also for 40 Ar, but with some limitations making it not suitable for all
applications. It requires an additional cross section file to be requested from the authors. In addition, as for
6 Li, one has to define the respective monoisotopic material and call it ARGON-40 (or with a character string
containing AR-40 or 40-AR). The physical gammas from the 40 Ar(n,γ)41 Ar capture reaction can be obtained
only in the context of the pointwise treatment.
6. Recoil protons are always transported explicitly, and so is the proton from the 14 N(n,p)14 C reaction.
7. The groups are numbered in decreasing energy order (see Chap. 10 for a detailed description). The energy
limits of the thermal neutron groups in the standard Fluka neutron library are reported in 10.4.1.1.
8. Here are the settings for transport of low-energy neutrons corresponding to available DEFAULTS SDUM options:
CALORIMEtry, ICARUS, NEUTRONS, PRECISIOn: low-energy neutrons are transported, with generation of
charged secondaries, correlated photon cascades and use of pointwise cross sections when available
EET/TRANsmut, HADROTHErapy, NEW–DEFAults (or DEFAULTS missing), SHIELDINg: low-energy neu-
trons are transported using always multigroup cross sections
Any other SDUM value of DEFAULTS: no low-energy neutron transport
9. If treatment of low energy neutrons is requested, one must make sure that the transport threshold for neutrons
(set with PART–THR) be equal to the minimum energy needed for neutron transport (typically 10−5 eV). Please
note that the behaviour of the PART–THR option for neutrons has changed with respect to past releases.
Example:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
LOW-NEUT 260.0 42.0 0.020 2.0 31.0 11.0
* The low-energy neutron library used is the (260n, 42gamma) standard
7.42 MATERIAL
WHAT(1) = atomic number (meaningful only when not coupled to a COMPOUND card; otherwise WHAT(1)
must be = 0.0)
No default
WHAT(2) = atomic weight (in g/mole). Normally NOT to be filled: if given, it overrides the default value.
Default : computed according to the natural composition of an element with atomic number
WHAT(1), or to the identity of its isotope specified by WHAT(6). Meaningless if coupled
to a COMPOUND card.
WHAT(3) = density in g/cm3. Note that if the density is lower than 0.01 g/cm3, the material is considered to
be a gas at atmospheric pressure unless set otherwise by MAT–PROP (p. 165)
No default
WHAT(4) = number (index) of the material. NOT to be filled in case of name-based input.
Default = NMAT+1 (NMAT is the current number of defined materials. Its value is = 25 before
any MATERIAL card is given, and doesn’t change if WHAT(4) overrides a number which
has already been assigned), but for predefined materials in name-based inputs
WHAT(5) ≥ 2.0: alternate material number (or name, in name-based input) for ionisation processes
(this material will be used instead of WHAT(1) for dE/dx etc.)
> 0.0 and ≤ 2.0: ignored
WHAT(6) = mass number of the material. Only integer values (still in real format) make sense.
Default = 0.0, i.e., natural isotopic composition of the WHAT(1) element (but see Note 8). For
isotopic composition other than natural or single isotope, see COMPOUND.
SDUM = name of the material
No default
Default (option MATERIAL not given): standard pre-defined material numbers are used (see list in
Table 5.3, p. 47).
Notes
1. MATERIAL cards can be used together with COMPOUND cards in order to define compounds, mixtures or
isotopic compositions. See COMPOUND for input instructions (p. 84).
2. Material number 1.0 is always Black Hole (called also External Vacuum) and it cannot be redefined. (All
particles vanish when they reach the Black Hole, which has an infinite absorption cross section)
3. Material number 2.0 is always Vacuum (of zero absorption cross section) and it cannot be redefined.
4. In name-based inputs it is recommended to omit the number of the material (and use its name in COMPOUND
and ASSIGNMAt commands). On the contrary, if the input is number-based, it is not recommended to omit it.
5. In an explicitly number-based input (declared as such by WHAT(4) = 4.0 in command GLOBAL) it is allowed
to redefine a material name overriding a number already assigned (either by default, see list in Table 5.3, or
by a previous MATERIAL card), or by using a new number.
If the number has not been assigned before, it must be the next number available (26.0, 27.0. . . for successive
MATERIAL cards). In a number-based input, it is dangerous to leave empty gaps in the number sequence,
although the program takes care of redefining the number: in fact, the incorrect number is likely to be still
used in other commands such as ASSIGNMAt and COMPOUND, leading to crashes or to undetected errors.
If the input is name-based and the number is not given explicitly, the program automatically assigns it and
the number sequence is automatically respected. The assigned number can be read from standard output, but
the user only needs to refer to that material by its name in other input cards.
6. Materials having a different density at the macroscopic and at the microscopic level (e.g., spongy matter or
approximations for not entirely empty vacuum) need a special treatment regarding stopping power (density
effect). In such cases, see MAT–PROP, p. 165.
7. If low-energy neutron transport is desired, the material name must coincide with that of one of the low-energy
neutron cross section materials in the Fluka library (see 10.4), or a correspondence must be set using option
LOW–MAT, p. 158.
8. If the card concerns an element that does not exist in nature, setting WHAT(6) = 0.0 cannot provide the
natural isotopic composition. Therefore a single isotope will be selected (usually the one with the longest
half-life). To avoid confusion, it is suggested to declare explicitly the desired isotope instead.
9. The largest atomic number that can be handled by Fluka is 100.
Example:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
MATERIAL 1. 0.0 8.988E-5 0.0 0.0 1. HYDROGEN
LOW-MAT HYDROGEN 1. 11. 296. 0.0 0. HYDROGEN
MATERIAL 6. 0.0 2.265 0.0 0.0 0. CARBON
MATERIAL 6. 0.0 2.0 0.0 0.0 0. GRAPHITE
LOW-MAT CARBON 6. -3. 296. 0.0 0. GRAPHITE
MATERIAL 41. 0.0 8.57 0.0 0.0 0. NIOBIUM
MATERIAL 48. 0.0 8.650 0.0 0.0 0. CADMIUM
MATERIAL 24. 0.0 7.19 0.0 0.0 0. CHROMIUM
MATERIAL 27. 0.0 8.90 0.0 0.0 0. COBALT
* Several (name-based) cases are illustrated:
* Hydrogen, pre-defined as material 3, is re-defined as monoisotopic 1-H.
* Command LOW-MAT has been added to force this material to be mapped to
* CH2-bound 1-H for what concerns low energy neutron transport.
* Carbon, pre-defined as material 6.0, is re-defined with a different density,
* and is also redefined with a different name (GRAPHITE), mapped to
* graphite-bound carbon.
* Niobium, Cadmium, Chromium and Cobalt are added to the list.
7.43 MAT–PROP
Provides extra information about materials
1. to supply extra information about gaseous materials and materials with fictitious or effective density
2. to override the default average ionisation potential
3. to set a flag to call the user routine USRMED (p. 373) every time a particle is going to be transported in
selected material(s)
4. to set the energy threshold for DPAs (Displacements Per Atom)
For SDUM ≠ DPA–ENER, USERDIREctive:
WHAT(2) = RHOR factor: this factor multiplies the density of a material when calculating the density
effect parameters (e.g., if a reduced density is used to simulate voids, but of course the
density effect parameters must be computed with the actual local physical density at the
microscopic level). See Note 3 below.
= 0.0: ignored
< 0.0: a possible previously input value is restored to default = 1.0
Default = 1.0
WHAT(3) > 0.0: average ionisation potential to be used for dE/dx calculations (eV)
< 0.0: a default value of the average ionisation potential is obtained from the systematics of
Ziegler [209] or Sternheimer, Berger and Seltzer [194, 195]
= 0.0: ignored
Default : ionisation potential calculated from systematics
WHAT(4) = lower bound of the indices of materials, or corresponding name, in which gas pressure, RHOR
factor or ionisation potential are set
(“From material WHAT(4). . . ”)
Default = 3.0
WHAT(5) = upper bound of the indices of materials, or corresponding name, in which gas pressure, RHOR
factor or ionisation potential are set
(“. . . to material WHAT(5). . . ”)
Default = WHAT(4)
Default (option MAT–PROP not given): if the density of the default material or that assigned by a
MATERIAL card is > 0.01, the material is not assumed to be a gas. Otherwise it is a gas
at a default pressure of 1 atmosphere. If the material is a compound, the average ionisation
potential is that resulting from applying Bragg’s rule of additivity to stopping power.
For SDUM = DPA–ENER:
WHAT(1) > 0.0: Damage energy threshold (eV) for the given materials (see Note 5)
= 0.0: ignored
Default = 30 eV
WHAT(2) and WHAT(3): not used
WHAT(4) = lower bound of the indices of materials, or corresponding name, in which the
damage energy threshold has to be applied
(“From material WHAT(4). . . ”)
Default = 3.0
WHAT(5) = upper bound of the indices of materials, or corresponding name, in which the
damage energy threshold has to be applied
(“. . . to material WHAT(5). . . ”)
Default = WHAT(4)
Default (option MAT–PROP not given): Damage energy threshold = 30 eV for all materials
For SDUM = USERDIREctive:
WHAT(4) = lower bound of the indices of materials, or corresponding name, in which the call to USRMED
must be performed
(“From material WHAT(4). . . ”)
Default = 3.0
WHAT(5) = upper bound of the indices of materials, or corresponding name, in which the call to USRMED
must be performed
(“. . . to material WHAT(5). . . ”)
Default = WHAT(4)
Default (option MAT–PROP not given): no extra information about the assigned materials is supplied
Notes
1. When issuing a MATERIAL definition the gas pressure is set to 1 atm if the density RHO is < 0.01. If this value
is not acceptable to the user, a MAT–PROP card must be issued after the MATERIAL card to force a different
value of the gas pressure. Note that this is one of the rare cases (with GLOBAL, DEFAULTS and PLOTGEOM)
where sequential order of input cards is of importance in Fluka.
A non-zero value of WHAT(1) must be given only for gases: it is important when calculating the density effect
parameters of the stopping power (see Note 1 to option STERNHEIme, p. 234, and Note 2 here below).
2. If WHAT(1) is set to a value > 0.0, the transport of charged particles will be calculated according to a density
RHO defined at the actual pressure by the corresponding MATERIAL card, while the density effect correction
to stopping power will be calculated using a density ρ(NTP) = RHO/WHAT(1) and then re-scaled to the actual
density RHO.
3. When giving a WHAT(2) non-zero value, remember that if RHO (defined by a MATERIAL card) indicates the
“transport (effective) density”, the “physical density” used to calculate the density effect on stopping power
will be RHOR*RHO = WHAT(2)*RHO.
4. Displacement damage can be induced by all particles produced in a cascade, including high energy photons.
The latter, however, have to initiate a reaction producing charged particles, neutrons or ions.
5. The damage threshold is the minimum energy needed to produce a defect. Typical values used in the Njoy99
code [141] are:
Li: 10 eV, C in SiC: 20 eV, Graphite: 30–35 eV, Al: 27 eV, Si: 25 eV, Mn, Fe, Co, Ni, Cu, Nb: 40 eV, Mo:
60 eV, W: 90 eV, Pb: 25 eV
6. In most problems, the expected DPA values are generally expressed by very small numbers.
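For instance, a possible DPA–ENER request, consistent with the parameters described above (the 40 eV value
follows the typical Njoy99 figures quoted in Note 5; the material choice is only illustrative):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
* Damage energy threshold of 40 eV for iron:
MAT-PROP 40.0 0.0 0.0 IRON IRON 0. DPA-ENER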
SDUM = USERDIREctive:
7. User routine USRMED is typically used to implement albedo and refraction, especially in connection with optical
photon transport as defined by OPT–PROP (p. 187). See 13.2.28 for instructions.
Example 2:
* Lung tissue with ICRP composition and Sternheimer parameters
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
MATERIAL 1. 0.0 8.3748E-5 3. 0.0 1. HYDROGEN
MATERIAL 6. 0.0 2.265 6. 0.0 0. CARBON
MATERIAL 7. 0.0 0.0011653 7. 0.0 0. NITROGEN
MATERIAL 8. 0.0 0.001429 8. 0.0 0. OXYGEN
MATERIAL 12. 0.0 1.74 9. 0.0 0. MAGNESIU
MATERIAL 11. 0.0 0.971 10. 0.0 0. SODIUM
MATERIAL 26. 0.0 7.874 11. 0.0 0. IRON
MATERIAL 16. 0.0 2.0 12. 0.0 0. SULFUR
MATERIAL 17. 0.0 2.9947E-3 13 0.0 0. CHLORINE
MATERIAL 19. 0.0 0.862 14. 0.0 0. POTASSIU
MATERIAL 15. 0.0 2.2 16. 0.0 0. PHOSPHO
MATERIAL 30. 0.0 7.133 17. 0.0 0. ZINC
MATERIAL 20. 0.0 1.55 21. 0.0 0. CALCIUM
* Average density of lung is 0.3 g/cm3
MATERIAL 0.0 0.0 0.3 18. 0.0 0. LUNG
COMPOUND -0.101278 3. -0.10231 6. -0.02865 7. LUNG
COMPOUND -0.757072 8. -0.00184 10. -0.00073 9. LUNG
COMPOUND -0.0008 16. -0.00225 12. -0.00266 13. LUNG
COMPOUND -0.00194 14. -0.00009 21. -0.00037 11. LUNG
COMPOUND -0.00001 17. 0. 0. 0. 0. LUNG
* Local density of lung is 1.05 = 0.3*3.50 g/cm3. Average ionisation
* potential is 75.3 eV (At. Data Nucl. Data Tab. 30, 261 (1984))
MAT-PROP 0.0 3.50 75.3 18. 0. 0.
STERNHEI 3.4708 0.2261 2.8001 0.08588 3.5353 0. 18
7.44 MCSTHRESh
Defines some of the accuracy requirements for Multiple Coulomb Scattering
(MCS) of heavy charged particles (hadrons and muons).
WHAT(1) ≥ 0.0: detailed multiple Coulomb scattering for primary charged hadrons and muons down
to the minimum energy allowed by Molière’s theory
< 0.0: detailed multiple Coulomb scattering for primary charged hadrons and muons down
to a kinetic energy equal to |WHAT(1)| (GeV)
Default = 1.0 if option DEFAULTS (p. 92) has been chosen with SDUM = CALORIMEtry,
HADROTHErapy, ICARUS or PRECISIOn.
If SDUM = EET/TRANsmut, the default is = -0.01 (transport of primaries with
multiple Coulomb scattering down to 10 MeV).
With any other SDUM value, or if DEFAULTS is missing, the default is = -0.02
(transport of primaries with multiple Coulomb scattering down to 20 MeV).
WHAT(2) ≥ 0.0: detailed multiple Coulomb scattering for secondary charged hadrons and muons down
to the minimum energy allowed by Molière’s theory
< 0.0: detailed multiple Coulomb scattering for secondary charged hadrons and muons down
to a kinetic energy equal to |WHAT(2)| (GeV)
Default = 1.0 if DEFAULTS has been chosen with SDUM = CALORIMEtry, HADROTHErapy,
ICARUS or PRECISIOn.
If SDUM = EET/TRANsmut, NEW–DEFAults or SHIELDINg, or if DEFAULTS is missing, the
default is = -0.02 (transport of secondaries with multiple Coulomb scattering down to 20 MeV).
With any other SDUM value, the default is = -1.0
(transport of secondaries with multiple Coulomb scattering down to 1 GeV).
WHAT(3) – WHAT(6), SDUM: not used
Default : (option MCSTHRES not given): the defaults depend on option DEFAULTS as explained above
and in Note 6. See also Table 7.1 on p. 97.
Notes
1. The MCSTHRES option is not used often, since option DEFAULTS ensures the MCS parameter setting most
appropriate for a wide range of problems. In most cases, it is suggested to have multiple Coulomb scattering
fully activated for both primary and secondary particles over the whole energy range. This corresponds to
using WHAT(1) ≥ 0.0 and WHAT(2) ≥ 0.0 (or at least WHAT(2) < 0.0 with an absolute value much smaller
than beam energy).
2. WHAT(1) < 0.0 with |WHAT(1)| not much smaller than the primary energy should generally be avoided.
The reason is twofold:
(a) tracking accuracy would be spoiled for no substantial gain in speed
(b) Fluka tracking without MCS does not take into account the variation of nuclear interaction cross section
with energy
3. However, there are some cases where it can be useful to set WHAT(1) and/or WHAT(2) to a negative number
with absolute value larger than beam energy. In this case no MCS is performed but tracking and maxi-
mum energy loss per step are controlled anyway by the most sophisticated transport algorithm available (see
FLUKAFIX, p. 133 and STEPSIZE, p. 232).
Complete suppression of multiple scattering can be useful in some particular cases, for instance when replac-
ing a gas of extremely low density by a gas of the same composition but of much larger density in order to
increase the frequency of inelastic interactions (of course, the results must then be scaled by the density ra-
tio). In such cases, one should also select the biased density so that no re-interaction of secondaries can take
place. An alternative way to completely switch off multiple Coulomb scattering of hadrons and muons is to use
MULSOPT (p. 174) with WHAT(2) ≥ 3.0 (MULSOPT, however, can deal also with electrons and positrons,
while MCSTHRES cannot; on the other hand, MULSOPT does not allow one to distinguish between primary and
secondary particles).
4. In order to get the most accurate treatment of Multiple Coulomb Scattering, a step optimisation and higher
order corrections can be requested by option MULSOPT (but with an important increase in CPU time).
5. In pure electromagnetic or low-energy neutron problems, option MCSTHRES does not need to be given and
has no effect.
6. Here are the MCS settings corresponding to available DEFAULTS options:
CALORIMEtry, HADROTHErapy, ICARUS, PRECISIOn: Multiple scattering threshold at minimum allowed
energy both for primary and secondary charged particles
EET/TRANsmutation: MCS threshold = 10 MeV for primaries and 20 MeV for secondaries
NEW–DEFAults (or DEFAULTS missing), SHIELDING: 20 MeV threshold for both primaries and secondaries
Any other SDUM value: 20 MeV for primaries and 1 GeV for secondaries
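A possible setting, consistent with the parameters described above (the 5 MeV value is only illustrative):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
* Detailed multiple Coulomb scattering down to the minimum energy allowed by
* Moliere theory for primaries, and down to 5 MeV kinetic energy for secondaries:
MCSTHRES 1.0 -0.005 0.0 0.0 0.0 0.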
7.45 MGNFIELD
Sets the tracking conditions for transport in magnetic fields and may also
define a homogeneous magnetic field.
WHAT(1) = largest angle in degrees that a charged particle is allowed to travel in a single step
Default = 57.0 (but a maximum of 30.0 is recommended!)
WHAT(2) = upper limit to error of the boundary iteration in cm (minimum accuracy accepted in deter-
mining a boundary intersection). It also sets the minimum radius of curvature for stepping
according to WHAT(1)
Default = 0.05 cm.
WHAT(3) = minimum step length if the step is forced to be smaller because the angle is larger than
WHAT(1).
Default = 0.1 cm.
WHAT(4) – WHAT(6): = Bx , By , Bz components of magnetic field on the coordinate axes (in tesla).
Default (Bx = By = Bz = 0.0): a user-supplied subroutine MAGFLD (p. 356) is assumed to
provide the actual values (see Notes 2 and 3 below)
SDUM : not used
Default (option MGNFIELD not given): the defaults indicated for WHAT(1–6) apply if a magnetic field
exists in the current region because of an ASSIGNMAt command (p. 66).
Notes
1. If Bx = By = Bz = 0.0, the user-written subroutine MAGFLD is called at each step to get the direction cosines
and the module (in tesla) of the magnetic field as a function of region or of coordinates. A sample subroutine is
provided with the Fluka code; instructions on how to write user-supplied routines can be found in Chap. 13.
2. Note that the argument list of subroutine MAGFLD is (X,Y,Z,BTX,BTY,BTZ,B,NREG,IDISC), where BTX, BTY,
BTZ are the direction cosines of the magnetic field at point X,Y,Z (not the components of the field! The field
magnitude is given by B). For this reason, it is imperative that MAGFLD return normalised values of BTX, BTY
and BTZ such that the sum of their squares be = 1.0 in double precision.
Three zero values are not accepted: if the field is zero at the point concerned, you must return for instance
0.0, 0.0, 1.0 and B = 0.0.
On the contrary, note that Bx , By , Bz in the MGNFIELD option (p. 172), given by WHAT(4). . . WHAT(6) as
described above, are the field components and not the cosines.
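A minimal sketch of such a routine, assuming only the argument list quoted above (the template distributed
with Fluka contains additional INCLUDE files; the uniform 1 tesla field along +z is purely illustrative):
      SUBROUTINE MAGFLD ( X, Y, Z, BTX, BTY, BTZ, B, NREG, IDISC )
      DOUBLE PRECISION X, Y, Z, BTX, BTY, BTZ, B
      INTEGER NREG, IDISC
*  Return the field direction cosines (sum of squares = 1.0 in double
*  precision) and the field magnitude B in tesla; IDISC is left at 0.
      IDISC = 0
      BTX   = 0.0D0
      BTY   = 0.0D0
      BTZ   = 1.0D0
      B     = 1.0D0
      RETURN
      END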
3. Magnetic field tracking is performed only in regions defined as magnetic field regions by command ASSIGNMAt
(p. 66). It is strongly recommended to define as such only regions where a non-zero magnetic field effectively
exists, due to the less efficient and accurate tracking algorithm used in magnetic fields.
Defining a region as having a magnetic field and then systematically returning B = 0.0 in that region via
subroutine MAGFLD is not allowed.
4. The maximum error on the boundary iteration, WHAT(2), must be compatible with the minimum linear
dimension of any region.
5. It is recommended to activate also option STEPSIZE (p. 232) inside and close to regions where a magnetic field
is present. That option can be used to set a minimum and a maximum step size (in cm) for every region.
6. In case of conflict, WHAT(3) overrides the step size requested by option STEPSIZE. Therefore, it is suggested
to set it not larger than the latter. The purpose of this constraint is to avoid tracking in detail low-energy
particles along a helix of very small radius, by forcing several turns into a single step (all the energy will be
deposited at the same point).
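An example of a homogeneous field definition, consistent with the parameters described above (all values are
illustrative):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
* Homogeneous 1.5 T field along z; maximum step deflection 30 degrees,
* boundary crossing accuracy 0.02 cm, minimum step 0.05 cm:
MGNFIELD 30.0 0.02 0.05 0.0 0.0 1.5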
7.46 MULSOPT
Sets the tracking conditions for multiple Coulomb scattering (MCS), for
both hadrons/muons and e+ e− . Can also be used to activate single scatter-
ing.
WHAT(1) : controls the step optimisation for multiple Coulomb scattering (see Note 1) and the number
of single scatterings on a material by material basis
≤ -1.0: a possible previous request of optimisation is cancelled and the number of single
scatterings in the materials indicated by WHAT(4)–WHAT(6) is reset to the default
value (i.e., 0. or the global default possibly set previously by the MULSOPT option
with SDUM = GLOBAL/GLOBHAD/GLOBEMF)
= 0.0: ignored
= i0 + i1 × 10 + i2 × 100000, with 0 ≤ i0 ≤ 1, 0 ≤ i1 ≤ 10000, 0 ≤ i2 ≤ 10000:
i0 ≥ 1 : the optimisation is activated
i1 − 1 = number of single scattering steps for hadrons and muons in the materials indicated
by WHAT(4)–WHAT(6)
i1 = 0 : ignored
i2 − 1 = number of single scattering steps for electrons and positrons in the materials
indicated by WHAT(4)–WHAT(6)
i2 = 0 : ignored
|WHAT(2)| = 1.0: spin-relativistic corrections are activated for charged hadrons and muons at the 1st
Born approximation level
|WHAT(2)| = 2.0: spin-relativistic corrections are activated for hadrons and muons at the 2nd Born
approximation level
WHAT(2) < 0.0: nuclear finite size effects (form factors) are activated (see Note 2).
= -3.0: nuclear finite size effects are considered but not the spin-relativistic effects
≥ 3.0: multiple scattering for hadrons and muons is completely suppressed (see Note 3).
|WHAT(3)|= 1.0: spin-relativistic corrections activated for e± in the 1st Born approximation
|WHAT(3)|= 2.0: spin-relativistic corrections activated for e± in the 2nd Born approximation
WHAT(3) < 0.0: nuclear finite size effects are activated
≥ 3.0: multiple scattering for e+ and e− is completely suppressed
WHAT(4) = lower bound of the indices of the materials, or corresponding name, in which the corrections
are activated
(“From material WHAT(4). . . ”)
Default = 3.0
WHAT(5) = upper bound of the indices of the materials, or corresponding name, in which the corrections
are activated
(“. . . to material WHAT(5). . . ”)
Default = WHAT(4)
SDUM = FANO–ON : Fano correction for inelastic interactions of charged hadrons and muons on
atomic electrons [64] is switched on
= FANO–OFF : Fano correction for inelastic interactions of charged hadrons and muons on
atomic electrons is switched off
= MLSH–ON : Original Molière screening angle on for hadrons and muons
= MLSH–OFF : Molière screening angle for hadrons and muons as modified by Berger & Seltzer
for e+ e− (not recommended)
Default : Fano correction on, original Molière screening angle for hadrons on
(GLOBEMF restricts the input value to e+ e− , GLOBHAD to charged hadrons and muons)
WHAT(1) : controls the minimum MCS step size used by the boundary approach algorithm for e+ e−
and charged heavy particles
≥ 0.0 and < 0.2: ignored
≥ 0.2: the minimum step size is set equal to the size corresponding to B = 5 in Molière
theory, multiplied by WHAT(1)
< 0.0: the minimum step size is reset to default
WHAT(2) : index of step stretching factor tabulation to be used by the electron/positron transport al-
gorithm when approaching a boundary.
Only for experts! Not for the normal user
The values of the index implemented for the moment are 1,2,3,4.
Values 11,12,13,14 cause the sensing algorithm to multiply the range/MCS step rather
than the current step.
Values 101,111,102,112,103,113,104,114 have the additional effect of making the algo-
rithm resample as unphysical any step cut at a boundary and “reflected” from the boundary.
= 0.0: ignored
< 0.0: the tabulation index is reset to default
Default = 1.0 (maximum accuracy)
WHAT(3) : controls the optimal step to be used by the optimisation option (and to some extent by the
hadron/muon boundary approach algorithm).
Only for experts! Not for the normal user
≥ 0.0 and < 0.2: ignored
≥ 0.2: the minimum step size is set equal to the size corresponding to B = 5 in Molière
theory [30, 136–138], multiplied by WHAT(3)
< 0.0: the minimum step is reset to its default value
Default : minimum step size equal to that corresponding to B = 5, multiplied by 20.0
WHAT(4) > 0.0: single scattering option activated at boundaries or for too short steps
< 0.0: resets to default
= 0.0: ignored
WHAT(5) (meaningful only if single scattering is activated at boundaries and when the step is too
short: see WHAT(4) above)
> 0.0: single scattering option activated for energies too small for Molière theory to apply
< 0.0: single scattering is not activated
= 0.0: ignored
Default : single scattering is not activated
WHAT(6) (meaningful only if single scattering is activated at boundaries and when step is too short:
see WHAT(4) above)
> 0.0: number of single scatterings to be performed when crossing a boundary. To replace
multiple scattering with single scattering everywhere, see Note 5.
= 0.0: ignored
< 0.0: resets the default
Default = 1.0
Notes
1. When optimisation is requested, the program always makes the minimum step for which the Molière theory of
multiple scattering is applicable. Optimisation via MULSOPT is available only for charged hadrons and muons.
For electrons and positrons, option EMFFIX is recommended (p. 118).
2. The correction for the nuclear finite size has been implemented using simple Thomas-Fermi form factors ac-
cording to Tsai [201]. The user can provide more sophisticated values by supplying a function FORMFU which
must return the square of the nuclear form factor. See 13.2.7.
3. Complete suppression of multiple scattering can be useful in some particular cases, for instance when replacing
a gas of extremely low density by a gas of the same composition but of much larger density in order to increase
the frequency of inelastic interactions or bremsstrahlung reactions (of course, the results must then be scaled
by the density ratio). In such cases, one should also select the biased density so that no re-interaction of
secondaries can take place.
4. Runs in which the nuclear form factor is taken into account and/or the 2nd Born approximation is requested
are very CPU-time consuming at low energy (but not at high energy).
5. Setting WHAT(6) > 1000.0 with SDUM = GLOBAL, GLOBHAD or GLOBEMF systematically replaces multiple
scattering with single scattering everywhere. This choice is generally extremely demanding in CPU time,
except for particles of very low energy (a few keV), which have a very short history anyway. In such cases, the
single scattering option is even recommended [75].
Example 2:
* Maximum accuracy requested for the electron step size used in the boundary
* approach and in the optimisation algorithm. Single scattering activated for
* electrons at boundary crossing and when the step is too short for Moliere
* (but not when the energy is too low for Moliere). Boundaries will be
* crossed with 2 single scatterings.
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
MULSOPT 1.0 1.0 1.0 1.0 0.0 2. GLOBEMF
Example 3:
* Single scattering activated everywhere for all charged particles
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
MULSOPT 0.0 0.0 0.0 1.0 1.0 99999999. GLOBAL
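As a further illustration of the packed WHAT(1) encoding described above, the following sketch (material
numbers are chosen arbitrarily) activates the optimisation in materials 5 to 9 with 3 single scattering steps
for hadrons and muons (i1 = 4) and 2 for electrons and positrons (i2 = 3), i.e. WHAT(1) = 1 + 4 × 10 +
3 × 100000 = 300041, with spin-relativistic corrections at the 1st Born approximation level for both families:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
MULSOPT 300041. 1.0 1.0 5.0 9.0 0.0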
7.47 MUPHOTON
Controls muon photonuclear interactions
WHAT(4) = lower bound of the indices of materials, or corresponding name, in which muon nuclear
interactions must be simulated
(“From material WHAT(4). . . ”)
Default = 3.0
WHAT(5) = upper bound of the indices of materials, or corresponding name, in which muon nuclear
interactions must be simulated
(“. . . to material WHAT(5). . . ”)
Default = WHAT(4)
Default (option MUPHOTON not given): muon nuclear interactions are not simulated
Notes
1. Other high-energy interactions of muons with nuclei (pair production, bremsstrahlung) are controlled by option
PAIRBREM (p. 194), which applies also to charged hadrons.
2. Use of WHAT(1) = 2.0 (interaction without transport of the secondaries) gives the correct muon straggling
but simulates only in an approximate way the energy deposition distribution. A similar approach is found in
A. Van Ginneken’s codes Casim and Musim [203, 204].
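A minimal sketch of the mode described in Note 2 (WHAT(1) = 2.0, interactions without transport of the
secondaries; the material range is purely illustrative):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
MUPHOTON 2.0 0.0 0.0 12.0 13.0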
Example:
* Explicit pair production and bremsstrahlung requested for heavy charged
* particles in materials 12 and 13, provided the energy of the secondary
* electrons and positrons is > 500 keV. No threshold is requested for photon
* production. For muons, explicit nuclear interactions are also requested.
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
PAIRBREM 3.0 0.0 0.0005 12.0 13.0
MUPHOTON 1.0 0.0 0.0 12.0 13.0
7.48 MYRQMD
7.49 OPEN
Defines input/output files to be connected at run-time.
Notes
Examples:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
* opening the file with the random number seeds for the next run
OPEN 2. NEW
newseed.random
* the working space for Combinatorial Geometry
OPEN 16. SCRATCH
7.50 OPT–PROD
SDUM = CERE–OFF
SDUM = TRD–OFF
SDUM = SCIN–OFF
For SDUM = CERENKOV: switches on Cherenkov production and defines photon energy range
SDUM = CERENKOV
For SDUM = CEREN–WV: switches on Cherenkov production and defines photon wavelength range
SDUM = CEREN–WV
For SDUM = CEREN–OM: switches on Cherenkov production and defines photon angular frequency range
WHAT(1) = minimum Cherenkov photon angular frequency ω = 2πν in rad/s (ν = frequency)
SDUM = CEREN–OM
For SDUM = TR–RADIA: switches on Transition Radiation production and defines its energy range
SDUM = TR–RADIA
For SDUM = SCINTILL: switches on Scintillation Light production and defines photon energy
WHAT(1) = ith scintillation photon emission energy in GeV (imax =3, see Note 4)
WHAT(2) > 0: fraction of deposited energy going into ith scintillation photon emission
≤ -100: forces to use a user routine (not yet implemented)
≥ -99.0 and ≤ 0.0: ignored
SDUM = SCINTILL
For SDUM = SCINT–WV: switches on Scintillation Light production and defines photon wavelength
WHAT(1) = ith scintillation photon emission wavelength in cm (imax =3, see Note 4)
Default = 2.50 × 10^-5 (250 nm, or 1.2 × 10^6 GHz)
WHAT(2) > 0: fraction of deposited energy going into ith scintillation photon emission
≤ -100: forces to use a user routine (not yet implemented)
≥ -99.0 and ≤ 0.0: ignored
WHAT(3) : time constant of scintillation light in seconds
SDUM = SCINT–WV
For SDUM = SCINT–OM: switches on Scintillation Light production and defines photon angular frequency range
WHAT(1) = ith scintillation photon emission angular frequency ω = 2πν in rad/s, ν = frequency.
(imax =3, see Note 4)
Default = 3.14 × 10^15 rad/s (corresponding to 600 nm)
WHAT(2) =fraction of deposited energy going into ith scintillation photon emission
≤ -100: forces to use a user routine (not yet implemented)
≥ -99.0 and ≤ 0.0: ignored
WHAT(3) : time constant of scintillation light in seconds
SDUM = SCINT–OM
WHAT(4) = lower bound of the indices of materials in which the indicated Cherenkov, Scintillation or
TRD photon emission range is defined
(“From material WHAT(4). . . ”)
Default = 3.0
WHAT(5) = upper bound of the indices of materials in which the indicated Cherenkov, Scintillation or
TRD photon emission range is defined
(“. . . to material WHAT(5). . . ”)
Default = WHAT(4)
Default (option OPT–PROD not given): no Cherenkov, scintillation or TRD photon production
Notes
1. Optical photons such as those produced by Cherenkov effect are distinguished by their Fluka name
(OPTIPHOT) and by their Fluka id-number (-1), as shown in 5.1.
2. To transport optical photons, it is necessary to define the optical properties of the relevant materials by means
of option OPT–PROP (p. 187). Users can also write their own routines USRMED (p. 373), which is called at
every step and at boundary crossings when activated with MAT–PROP (p. 165), and FRGHNS (p. 354), which
defines surface roughness.
3. The energy/wavelength/frequency range as defined by OPT–PROD for Cherenkov photon production is not
necessarily the same as that defined for transport by means of OPT–PROP. The default values, however, are
the same.
4. In case of scintillation light, only monochromatic photons are considered for the moment, with a maximum of
i = 3 different lines. The lines can be defined by repeating i times the OPT–PROD card with SDUM = SCINTILL.
Example:
* Request production of Cherenkov photons with energies between 2 and 3 eV in
* materials 16, 17, 19 and 20, with wavelengths between 300 and 600 nm in
* materials 18, 20 and 22, and with frequencies between 0.5 and 1 million GHz
* in material 21
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
OPT-PROD 2.E-9 3.E-9 0.0 16.0 17.0 0. CERENKOV
OPT-PROD 2.E-9 3.E-9 0.0 19.0 20.0 0. CERENKOV
OPT-PROD 3.E-5 6.E-5 0.0 18.0 22.0 2. CEREN-WV
OPT-PROD 3.14E15 6.28E15 0.0 21.0 0.0 0. CEREN-OM
* Optical photon transport requested between 300 and 500 nm for all materials
* with number between 16 and 21
OPT-PROP 3.E-5 5.E-5 6.E-5 16.0 22.0 0. WV-LIMIT
* User routine USRMED called when an optical photon is going to be transported
* in materials 17 and 21
MAT-PROP 1.0 0.0 0.0 17. 21. 4. USERDIRE
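For scintillation light, one card per emission line is needed (see Note 4). A sketch with purely illustrative
values, assuming that the same line-by-line repetition applies to SDUM = SCINT–WV:
* Two scintillation emission lines at 430 nm and 490 nm in material 10,
* carrying respectively 4% and 2% of the deposited energy, with a 5 ns
* time constant
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
OPT-PROD 4.3E-5 0.04 5.E-9 10.0 0.0 0. SCINT-WV
OPT-PROD 4.9E-5 0.02 5.E-9 10.0 0.0 0. SCINT-WV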
7.51 OPT–PROP
For SDUM = WV–LIMIT: defines wavelength range for optical photon transport
WHAT(1) > 0.0: minimum wavelength (in cm) for optical photon transport
= 0.0: ignored
< 0.0: resets to default
Default = 2.5 × 10^-5 (250 nm)
WHAT(2) > 0.0: central wavelength (in cm) for optical photon transport
= 0.0: ignored
< 0.0: resets to default
Default = 5.89 × 10^-5 (589 nm, Na D line)
WHAT(3) > 0.0: maximum wavelength (in cm) for optical photon transport
= 0.0: ignored
< 0.0: resets to default
Default = 6.0 × 10^-5 (600 nm)
SDUM = WV–LIMIT
For SDUM = OM–LIMIT: defines angular frequency range for optical photon transport
WHAT(1) > 0.0: minimum angular frequency for optical photon transport ω = 2πν in rad/s
(ν = frequency)
= 0.0: ignored
< 0.0: resets to default
Default = 3.14 × 10^15 rad/s (corresponding to 600 nm)
WHAT(2) > 0.0: central angular frequency for optical photon transport ω = 2πν in rad/s
= 0.0: ignored
< 0.0: resets to default
Default = 3.20 × 10^15 rad/s (corresponding to 589 nm, Na D line)
WHAT(3) > 0.0: maximum angular frequency for optical photon transport ω = 2πν in rad/s
= 0.0: ignored
< 0.0: resets to default
Default = 7.53 × 10^15 rad/s (corresponding to 250 nm)
SDUM = OM–LIMIT
SDUM = RESET
WHAT(3) = 3rd optical property: (1 − r), where r is the reflectivity index at the central wavelength (or
at the central angular frequency, depending on which one of the two quantities has been
defined). See also Note 2
Default = 0.0
SDUM = METAL
WHAT(1) = 1st optical property: refraction index nrefr at the central wavelength (or at the central
angular frequency, depending on which one of the two quantities has been defined)
< -99: forces to use user routine RFRNDX (see Note 1)
Default = 1.0
WHAT(2) = 2nd optical property: absorption coefficient µabs (in cm^-1) at the central wavelength (or
at the central angular frequency, depending on which one of the two quantities has been
defined)
< -99: forces to use a user routine ABSCFF (see Note 1)
Default = 0.0
WHAT(3) = 3rd optical property: diffusion coefficient µdiff (in cm^-1) at the central wavelength (or at the
central angular frequency, depending on which one of the two quantities has been defined)
< -99: forces to use a user routine DFFCFF (see Note 1)
Default = 0.0
SDUM = blank
WHAT(1) = 4th (resp. 7th ) (resp. 10th ) optical property of the material (derivatives of the refraction
index, see Note 2)
Default = 0.0
WHAT(2) = 5th (resp. 8th ) (resp. 11th ) optical property of the material (derivatives of the absorption
coefficient, see Note 2)
Default = 0.0
WHAT(3) = 6th (resp. 9th ) (resp. 12th ) optical property of the material (derivatives of the diffusion
coefficient, see Note 2)
WHAT(4) – WHAT(6): assignment to materials, see below
SDUM = &1, &2 or &3 in any position in column 71 to 78 (or in the last field if free format is used)
WHAT(4) = lower bound of the indices of materials to which the indicated optical properties refer
(“From material WHAT(4). . . ”)
Default = 3.0
WHAT(5) = upper bound of the indices of materials to which the indicated optical properties refer
(“. . . to material WHAT(5). . . ”)
Default = WHAT(4)
For SDUM = SENSITIV: sets up the optical photon detection sensitivity parameters
(See also SDUM = WV–SENSI, SDUM = OM–SENSI, Note 3 and the examples in 12.2)
For SDUM = WV–SENSI: sets up the wavelength of the optical photon sensitivity
WHAT(1) > 0.0: minimum wavelength (in cm) for optical photon sensitivity
= 0.0: ignored
< 0.0: resets to default
Default = 2.5 × 10^-5 (250 nm)
WHAT(2) > 0.0: central wavelength (in cm) for optical photon sensitivity
= 0.0: ignored
< 0.0: resets to default
Default = 5.89 × 10^-5 (589 nm, Na D line)
WHAT(3) > 0.0: maximum wavelength (in cm) for optical photon sensitivity
= 0.0: ignored
For SDUM = OM–SENSI: sets up the angular frequency of the optical photon sensitivity
WHAT(1) > 0.0: minimum angular frequency for optical photon sensitivity ω = 2πν in rad/s
(ν = frequency)
= 0.0: ignored
WHAT(2) > 0.0: central angular frequency for optical photon sensitivity ω = 2πν in rad/s
= 0.0: ignored
WHAT(3) > 0.0: maximum angular frequency for optical photon sensitivity ω = 2πν in rad/s
SDUM = OM–SENSI
For SDUM = SPEC–BDX: flags special boundary crossings for optical photons
At the selected boundary crossings special user-defined properties are defined by means of the user routine
OPHBDX (see Note 1).
A maximum of 40 boundaries can be flagged by issuing option OPT–PROP with SDUM = SPEC–BDX as many
times as needed.
WHAT(1) ≥ 1.0: special boundary treatment activated for the (n+1)-th boundary
= 0.0: ignored
WHAT(4) ≥ 1.0: special boundary treatment activated for the (n+2)-th boundary
= 0.0: ignored
SDUM = SPEC–BDX
Notes
The optical photon sensitivity parameters are the value of the sensitivity at x = 0 and its first, second
and third derivatives with respect to x, where
x = (λ − λcentral)/λcentral or x = (ω − ωcentral)/ωcentral
Example 1:
* Optical photon transport requested between 3.E15 and 7.E15 rad/s
* (4.77E5 and 1.11E6 GHz, or 314 to 628 nm) for materials 6,9,12,15 and 18
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
OPT-PROP 3.E15 6.E15 7.E15 6.0 18.0 3. OM-LIMIT
* User routine USRMED called when an optical photon is going to be transported
* in materials 6, 12 and 18
MAT-PROP 1.0 0.0 0.0 6.0 18.0 6. USERDIRE
Example 2:
* Material 11 has a reflectivity index = 0.32
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
OPT-PROP 0.0 0.0 0.32 11.0 0.0 0. METAL
Example 3:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
* Optical photon transport requested between 300 and 600 nm for water.
* (material 9). The optical properties are for the Na D line (589 nm)
MATERIAL 1.0 0.0 .0000899 3.0 0.0 1. HYDROGEN
MATERIAL 8.0 0.0 0.00143 5.0 0.0 0. OXYGEN
MATERIAL 0.0 0.0 1.0 21.0 0.0 0. WATER
COMPOUND 2.0 3.0 1.0 5.0 0.0 0. WATER
OPT-PROP 3.E-5 5.89E-5 6.E-5 21.0 0.0 0. WV-LIMIT
* diffusion coefficient of water from Appl.Opt. 17, 3587 (1978)
OPT-PROP 1.33299 0.0013 1.22E-5 21.0 0.0 0.
7.52 PAIRBREM
Controls simulation of pair production and bremsstrahlung by high-energy
muons, charged hadrons and light ions (up to α’s)
Notes
1. Initialisation of bremsstrahlung and pair production by muons, charged hadrons and light ions (up to α's) is
very demanding in computer time. On the other hand, these effects must be taken into account for a correct
simulation at high energies. It is suggested to inhibit them in the phase of input preparation and debugging,
but to activate them again in production runs (in long runs the time taken by initialisation is of course a smaller
fraction of the total time). In pure electron-photon problems and in low-energy hadron problems, these effects
should be inhibited (WHAT(1) = -3.0).
2. When setting a threshold for pair and bremsstrahlung production by muons, charged hadrons and light ions
(up to α’s), the following considerations should be taken into account:
– photon production threshold (WHAT(3)) must of course not be lower than the photon transport cutoff as
set by EMFCUT (p. 113) or by the chosen default (in general it is reasonable to set the two equal);
– on the contrary, the electron and positron production threshold (WHAT(2)) should in general be set =
0.0, whatever the electron transport cutoff, unless the photon transport cutoff is set higher than 511 keV. In
this way, if the positron is produced with an energy lower than electron transport cutoff it will be forced
to annihilate at the point of production, but the two 511 keV annihilation photons will be generated
correctly.
3. If option PAIRBREM is not activated, by default Fluka treats both bremsstrahlung and pair production by
muons and charged hadrons as continuous energy losses (i.e., without generating secondaries and depositing
their energy at the point of production). This will reproduce correctly the average ranges but not the straggling
and the dose distributions. A similar approach is found in A. Van Ginneken’s code Casim [148, 203, 204].
4. Virtual photonuclear interactions by high-energy muons are controlled by option MUPHOTON (p. 178).
5. Here are the settings for pair and bremsstrahlung production by high-energy muons and charged hadrons,
corresponding to available DEFAULTS options:
CALORIMEtry, ICARUS, PRECISIOn:
pair production is activated in all materials, with explicit generation of secondaries of any energy;
bremsstrahlung is also activated in all materials, with explicit generation of photons having energy
≥ 300 keV.
NEW–DEFAults, or DEFAULTS missing:
pair production is activated in all materials, with explicit generation of secondaries of any energy;
bremsstrahlung is also activated in all materials, with explicit generation of photons having energy
≥ 1 MeV.
Any other SDUM value:
both pair and bremsstrahlung production are activated in all materials, without explicit generation of
secondaries (continuous loss approximation).
Example 1:
* Explicit pair production and bremsstrahlung requested for muons and charged
* hadrons in materials 4, 7 and 10, provided the energy of the secondary
* electrons and positrons is > 1.2 MeV. No threshold is requested for photon
* production. For muons, explicit nuclear interactions are also requested.
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
PAIRBREM 3.0 0.0 0.0012 4.0 10.0 3.0
MUPHOTON 1.0 0.0 0.0 4.0 10.0 3.0
Example 2:
* Energy loss due to pair production and bremsstrahlung by muons and charged
* hadrons accounted for in materials 6, and 7, without explicit generation
* of secondaries
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
PAIRBREM 3.0 -1.0 -1.0 6.0 7.0
7.53 PART–THRes
Sets different energy transport cutoffs for hadrons, muons and neutrinos.
Default (option PART–THRes not given): thresholds as described above for WHAT(1) = 0.0.
Notes
1. If low-energy neutron transport is not requested (explicitly via LOW–NEUT or implicitly via DEFAULTS), the
energy of neutrons below 20 MeV is deposited on the spot.
2. The total momentum cutoffs of heavy ions are derived from that of a 4He ion (4–HELIUM) by scaling the latter
with the ratios of the atomic weights of the heavy ions and the 4He ion. The total momentum cutoffs for light
ions (4–HELIUM, 3–HELIUM, TRITON and DEUTERON) can be defined by PART–THRes. If this is not done,
they are derived from that of a proton by scaling the latter with the ratios of the atomic weights of the light
ions and a proton.
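As a worked illustration of the scaling rule (taking integer mass numbers as atomic weights for simplicity),
the total momentum cutoff of a 12C ion would be
p_cut(12C) ≈ p_cut(4He) × A(12C)/A(4He) = 3 × p_cut(4He)
where p_cut(4He) is the cutoff set (or defaulted) for 4–HELIUM.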
3. Option PART–THR acts on all particles except e+ e− and photons; when using the EMF option (p. 107) to
transport electrons, positrons and photons, the transport cutoff energy is governed by EMFCUT (p. 113).
4. When the energy of a charged particle becomes lower than the cutoff defined by PART–THR, and if such cutoff
is lower than 100 MeV, the particle is not stopped, but is ranged out to rest in an approximate way. Its kinetic
energy is deposited uniformly over the residual range if the latter is contained within a single region; otherwise
a new residual range is calculated at each boundary crossing and the residual kinetic energy is distributed
accordingly. If applicable, such a particle eventually decays at rest or is captured. All other transport and
interaction mechanisms are ignored (multiple scattering, delta ray production, inelastic or elastic collisions,
and also decay in flight), except curved paths in magnetic fields. Magnetic fields are taken into account, but only very roughly, since
the continuous slowing down of the particles is not simulated. Antiprotons and π − are always ranged out to
rest (without allowance for decay) and forced to annihilate on a nucleus.
5. If the cutoff is higher than 100 MeV, however, the particles are stopped in place without any further treatment.
If this happens at a boundary crossing where the material of the region entered is vacuum, a printed message
warns the user that energy is being deposited in vacuum.
6. By default the neutron threshold is set at 10^-14 GeV (10^-5 eV, the lowest boundary of the group structure).
So, normally it is not necessary to issue a PART–THR command at all for neutrons. A note of caution: if a
PART-THR has been issued spanning all particles, it is generally necessary to override it with another one
resetting the threshold for neutrons to 10^-14 GeV. As a general rule, however, if a neutron transport threshold
is set < 20 MeV, it is rounded to the closest lower group boundary.
Example:
* A threshold of 2 MeV (kinetic energy) is requested for heavy charged
* particles with id-numbers between 1 and 11 (protons, antiprotons and
* muons). A threshold of Gamma (= E/m) = 2 will apply for pions and kaons
* (numbers from 13 to 16). For all other particles, the defaults will apply.
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
PART-THR -0.002 1.0 11.0 0.0 0.0 1.0
PART-THR -2.0 13.0 16.0 1.0 1.0 0.0
7.54 PHOTONUC
Activates gamma, electron and positron interactions with nuclei.
WHAT(4) = lower index bound (or corresponding name) of materials where the indicated photonuclear
interactions are activated
(“From material WHAT(4). . . ”)
Default = 3.0
WHAT(5) = upper index bound (or corresponding name) of materials where the indicated photonuclear
interactions are activated
(“. . . to material WHAT(5). . . ”)
Default = WHAT(4)
SDUM : blank
WHAT(4) = lower index bound of materials (or corresponding name) where the indicated electronuclear
interactions are activated
(“From material WHAT(4). . . ”)
Default = 3.0
WHAT(5) = upper index bound of materials (or corresponding name) where the indicated electronuclear
interactions are activated
(“. . . to material WHAT(5). . . ”)
Default = WHAT(4)
SDUM = ELECTNUC
WHAT(4) = lower bound of the indices of materials where the indicated photomuon production mecha-
nisms are activated
(“From material WHAT(4). . . ”)
Default = 3.0
WHAT(5) = upper bound of the indices of materials where the indicated photomuon production mecha-
nisms are activated
(“. . . to material WHAT(5). . . ”)
Default = WHAT(4)
Default (option PHOTONUC not given): photon or electron interactions with nuclei as well as photomuon
production are not simulated
Notes
1. Muon photonuclear interactions (via virtual photons) are not handled by PHOTONUC but by MUPHOTON
(p. 178).
2. Because photonuclear and electronuclear cross sections are much smaller than photon cross sections for elec-
tromagnetic interactions with atoms and electrons, analogue simulations of photonuclear and electronuclear
interactions are very inefficient. Generally, it is recommended to use PHOTONUC in combination with LAM–
BIAS (p. 150) to increase artificially the frequency of photonuclear and electronuclear interactions. See Notes 9
and 10 to option LAM–BIAS for more details.
3. Also photomuon production cross sections are much smaller than photon cross sections for electromagnetic
interactions with atoms and electrons, as discussed in Note 2. But in this case, the artificial increase of inter-
actions can be performed directly with the PHOTONUC command (see WHAT(2) with SDUM = MUMUPAIR
or MUMUPRIM)
Example 1:
* Giant Resonance and Quasi-Deuteron photonuclear interactions are requested
* in material 18. The photon hadronic interaction length is artificially
* shortened by a factor 0.02 in order to improve statistics
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
PHOTONUC 1100.0 0.0 0.0 18.0 0.0 0.0
LAM-BIAS 0.0 0.02 18.0 7.0 0.0 0.0
Example 2:
* Photonuclear interactions are requested at all energies in materials
* 3, 7, 11 and 15. The photon hadronic interaction length is shortened
* by a factor 0.025
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
PHOTONUC 1.0 0.0 0.0 3.0 15.0 4.0
LAM-BIAS 0.0 0.025 0.0 7.0 0.0 0.0
7.55 PHYSICS
Allows overriding the standard Fluka defaults for physics processes.
WHAT(4) = lower bound of the particle id-numbers (or corresponding names) to which the decay flag
chosen by WHAT(1) applies
(“From particle WHAT(4). . . ”)
Default = 1.0
WHAT(5) = upper bound of the particle id-numbers (or corresponding names) to which the decay flag
chosen by WHAT(1) applies
(“. . . to particle WHAT(5). . . ”)
Default = WHAT(4)
Default = 20 TeV
Default = 5 GeV/n
Default = 0.125 GeV/n
WHAT(5) = smearing (±∆E, GeV/n) for the Fluka-Dpmjet switch energy for h-A interactions
< 0.0: resets to default (10 TeV)
Default : 10 TeV
WHAT(6) = flag for restricting Dpmjet h–A interactions to primary particles only
≤ -1.0: resets to default (false)
= 0.0: ignored
> 0.0: sets to true
Default (no PHYSICS option with SDUM = DPMTHREShold): Dpmjet is called for h–A interactions
above 20 TeV and for A–A interactions down to 5 GeV/n. Rqmd is called between 5 and
0.125 GeV/n.
Warning: to activate ion interactions refer to the IONTRANS card.
Warning: The Fluka executable must be built with the Dpmjet and Rqmd libraries to
perform A–A interactions above 125 MeV/n (see the ldpmqmd script in $FLUPRO/flutil).
Dpmjet must also be linked for h-A interactions above 20 TeV
WHAT(1) : flag for (de)activating heavy ion direct pair production
= 0.0: ignored
> 0.0: activated (it still requires heavy pair production activated via PAIRBREM for the
required materials)
Default = 1.0 (heavy ion direct pair production is activated in the materials defined by
PAIRBREM)
WHAT(2) : flag for (de)activating heavy ion bremsstrahlung (not yet implemented)
WHAT(3) : flag for (de)activating nuclear form factor effects in heavy ion delta ray production
= 0.0: ignored
> 0.0: nuclear form factor effects are activated (it still needs delta ray production activated
via DELTARAY for the required materials)
Default = 1.0 (nuclear form factor effects in heavy ion delta ray production are activated
in the materials defined by DELTARAY)
WHAT(4) – WHAT(6): not used
= 0.0: ignored
WHAT(2) = minimum energy for ions (GeV/n) above which splitting into nucleons will be performed
≤ 0.0: ignored
Default = 0.1 GeV/n
WHAT(3) = maximum energy for ions (GeV/n) below which splitting into nucleons will be performed
≤ 0.0: ignored
Default = 5 GeV/n
WHAT(1) : sets the maximum (pp) CMS momentum (used for initialisation of high energy models, typi-
cally Dpmjet)
< 0.0: resets to default
= 0.0: ignored
WHAT(4) : flag for activating charm production (CHA) in DIS neutrino interactions
= 1.0: CHA neutral current (NC) activated (not yet implemented)
= 2.0: CHA charged current (CC) activated
= 3.0: CHA NC and CC activated (NC not yet implemented)
< 0.0: no CHA interactions
= 0.0: ignored
Default = 3.0 (CHA CC activated, but NC not yet implemented)
Default (option PHYSICS not given): standard Fluka treatment of physics processes
Note
1. In order to achieve accurate results for residual nuclei production or fragment production with ion beams,
the evaporation of heavy fragments must be activated. This, however, is not the default, since it carries a
significant CPU burden and is not needed for most applications. The CPU burden is maximal for problems
with heavy targets, high energy beams, and no electromagnetic particle transport. It is often negligible for
problems with electromagnetic transport activated down to low thresholds.
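A minimal sketch of the corresponding request (assuming that WHAT(1) = 3.0 with SDUM = EVAPORAT selects
the new evaporation model with heavy fragment evaporation; see the EVAPORAT description for the exact codes):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
PHYSICS 3.0 0.0 0.0 0.0 0.0 0. EVAPORAT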
Example:
* Only hadronic decays are allowed for tau+ and tau- (id-number 41 and 42)
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
PHYSICS 201.0 0.0 0.0 41.0 42.0 0. DECAYS
* Maximum accuracy requested for decay of pi+ and pi-(id-number 13 and 14),
* but without accounting for polarisation
* Phase space
PHYSICS 2.0 0.0 0.0 13.0 14.0 0. DECAYS
* New evaporation model requested
PHYSICS 2.0 0.0 0.0 0.0 0.0 0. EVAPORAT
7.56 PLOTGEOM
Calls the Plotgeom geometry plotting package [112], to scan slices of the
problem geometry and to produce auxiliary files for plotting them, possibly
with a magnetic field superimposed
Notes
1. The PLOTGEOM codeword links to Fluka the Plotgeom program, which was written by R. Jaarsma and
H. Rief of the Ispra Joint Nuclear Research Centre, and was adapted as a stand-alone program for the Fluka
Combinatorial Geometry by G.R. Stevenson of CERN. The present version, integrated with the dynamically
allocated storage of the Fluka code as an input option, has been improved from several points of view, mainly
higher flexibility and smaller space requirements.
The following documentation is extracted with some modifications from the original EURATOM report of
Jaarsma and Rief [112].
Plotgeom is a program for checking the geometry input data of Monte Carlo particle transport codes. From
the points of intersection between the geometrical structure and a mesh of lines of flight, Plotgeom generates
a picture, representing a cross section of the geometry.
The user specifies a two-dimensional cross section of the geometry by giving its orientation with respect to the
geometry coordinate axes and the desired coordinates of the corners of the picture (note that the x-y coordinate
system of the “picture” is usually different from the one to which the geometry description refers).
The program generates a horizontal grid of lines (parallel to the x-axis of the picture), covering the area of
the desired picture. The constant distance between adjacent lines is 0.07 cm in the picture. The points of
intersection with the medium boundaries are recorded. After having scanned one line, each intersection point
P2j found is compared with each similar point P1k found on the line immediately preceding. If the distance
between a couple of points P1k , P2j is ≤ 0.035 cm, then the linepiece P1k –P2j is called a segment. If more
than one of the points P2 on the current line satisfies the quoted condition for a given P1, then only the nearest
one to that P1 is taken.
Now we define a “worm body” as being one segment or a string of segments, so that the endpoint of one
segment is the starting point of the next segment.
If a worm body with a last point P1j already exists, the segment P1j –P2k is connected to this worm body
and the last point of that worm body becomes P2k . Otherwise, the segment P1j –P2k is the first one of a new
worm body and the program looks for a “worm head” to be connected to P1j . This “head” has to be in the
neighbourhood of P1j between the two last scanned lines and is found by the subroutine HEADTL (PGMSCN in
the Fluka version), which applies the same principle for finding segments, but on a refined and variable grid.
If there is a worm body with a last point P1j and if on examining all P2 points no segment P1j –P2k is found,
then this body should be given a “tail”. This tail is determined by the subroutine HEADTL (PGMSCN) in the same
way as a head.
The “worms” (head, body, tail) thus created are stored on disk.
When the horizontal scanning is finished, the same procedure is repeated in the vertical direction (parallel
to the y-axis of the picture).
Finally the worms are concatenated as far as possible: head to tail, head to head, tail to tail. The strings of
worms formed in this way are plotted by means of any available graphics program (e.g., Paw).
2. A PLOTGEOM card can be issued only after the combinatorial geometry input has been read (either from an
external file or from standard input). In other words, PLOTGEOM cannot be input before the GEOEND card.
In addition, if WHAT(2) is different from 0.0, PLOTGEOM can be invoked only after materials have already
been assigned.
3. Since Plotgeom now makes use of the same dynamically allocated blank common storage of Fluka it is
convenient to issue the PLOTGEOM card just after geometry and material definitions, but before biasing and
any other option which makes use of permanent and/or temporary storage in blank common. The purpose is
twofold:
(a) this maximises the storage available for Plotgeom for a given blank common dimension, and hence
minimises the chances of having a too small storage
(b) since Plotgeom frees again all the used storage after completion, the total memory required for the
blank common is minimised
On the other hand, if the LATTICE geometry option is used, the PLOTGEOM command must be issued only
after all the transformations have been defined (i.e., after all ROT-DEFIni commands).
4. The input data required by Plotgeom to perform a slice scan must be given on the unit specified by WHAT(6)
as follows:
First line (format A80) : scan title
Second line (format 6E10.5): X0, Y0, Z0, X1, Y1, Z1
Third line (format 6E10.5): TYX, TYY, TYZ, TXX, TXY, TXZ
Fourth line (format 4E10.5): EXPANY, EXPANX, PFIMXX, PFIMXY
The meaning of the variables is:
X0, Y0, Z0 = real coordinates of the bottom left-hand corner of the picture
X1, Y1, Z1 = real coordinates of the top right-hand corner of the picture
TYX, TYY, TYZ = direction cosines of the y-axis of the plot
TXX, TXY, TXZ = direction cosines of the x-axis of the plot
EXPANY, EXPANX = expansion factors ≥ 0.1 for the y-axis, resp. the x-axis
PFIMXX, PFIMXY: if > 0, number of intervals along the x- and y-axis for plotting strength and direction of
a magnetic field returned by the user routine MAGFLD
There is some redundancy in the position and direction input: indeed once X0,Y0,Z0,X1,Y1,Z1 are given, only
one of the axes is actually required. Therefore the three cosines of one of the two axes can be left = 0.0.
If < 0.1, EXPANX, EXPANY are reset to the default = 1.0. Only their relative value matters.
5. The scan output file is written (formatted or not according to the value of SDUM) on unit LUNPGS (= 4) with
the default name of PLOTGEOM.STORE.
The formatted version is self explanatory, while the unformatted one is organised as follows:
1st record: one CHARACTER*80 variable (title of the scan)
2nd record: 14 REAL*4 variables: X0,Y0,Z0, X1,Y1,Z1, TYX,TYY,TYZ, TXX,TXY,TXZ, XAXLEN,YAXLEN
where:
X0,Y0,Z0 and X1,Y1,Z1 = coordinates of the bottom and top corners (from 2nd input line above)
TYX,TYY,TYZ and TXX,TXY,TXZ = direction cosines of the axes (from 3rd input line above)
XAXLEN and YAXLEN = lengths of x and y axes
Then, repeated M times (M ≥ 2, typically M = 2):
In a compressed file there is just an extra record at the very beginning of the file: it contains a CHARACTER*80
string equal to ’***COMPRESSED***COMPRESSED***’
Example:
* Plot a vertical section of a geometry where x-axis points up, y-axis points
* to the right, and z-axis into the page. The PLOTGEOM file will be formatted.
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
PLOTGEOM 1.0 1.0 0.0 0.0 0.0 5. FORMAT
Vertical section of the tunnel geometry at z = 35 m
-120.0 -180.0 3500.0 120.0 180.0 3500.0
1.0 0.0 0.0 0.0 1.0 0.0
1.0 1.0 0.0
7.57 POLARIZAti
Defines the polarisation of a photon beam or source and activates transport
of polarised photons.
|WHAT(1)| ≤ 1.0: x-axis cosine of the beam polarisation vector (electric vector in case of photons)
> 1.0: resets the default (no polarisation)
This value can be overridden in user routine SOURCE (p. 363) by assigning a value to
variable UBMPOL
Default = -2.0 (no polarisation)
Default (option POLARIZAti not given): photons are not assumed to be polarised
Notes
1. Polarisation direction defined by option POLARIZAti is meaningful only if the beam direction is along the
positive z-axis, unless a command BEAMAXES is issued to establish a beam reference frame different from the
geometry frame (see 7.5).
2. The program takes care of properly normalising the cosines unless they are badly unnormalised (in the latter
case the code would reset to no polarisation). If WHAT(4) ≥ 1.0, the code makes sure that the two vectors
are orthogonal within the minimum possible rounding errors.
3. What polarisation means is dependent on the physics implemented in the code: for the moment the only
polarisation dependent effects are Compton, Rayleigh and photoelectric effect for photons, where of course the
polarisation vector represents the electric field direction and must be normal to the beam direction.
Example:
* Synchrotron radiation beam with m_e/E mrad x,y divergence (produced by a 3 GeV
* electron beam). The actual spectrum is provided by a user-written source
* (E_max = 500 keV). Photons are fully polarised in the horizontal (y) direction
* and the polarisation is orthogonal to the direction of the primary photons
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
DEFAULTS EM-CASCA
BEAM -500.E-6 0.0 1.7033E-4 0.0 0.0 1.0 PHOTON
SOURCE 0.0 0.0 0.0 0.0 0.0 0.0
POLARIZA 0.0 1.0 0.0 1.0 1.0 0.0
7.58 RADDECAY
Requests simulation of radioactive decays and sets the corresponding biasing
and transport conditions
WHAT(5) : multiplication factors to be applied to e± /γ transport energy cutoffs, respectively for prompt
and decay radiation
> 0.0: a 10-digit number xxxxxyyyyy, where the first and the last 5 digits are interpreted
as follows (see Note 5 below):
xxxxx × 0.1 = transport energy cutoff multiplication factor for β± and γ decay radiation
yyyyy × 0.1 = transport energy cutoff multiplication factor for prompt e± and γ radiation
= 0.0: ignored
< 0.0: reset to default
Default : e± and γ transport energy cutoffs are unchanged: the multiplication factors are
set = 1.0 for both prompt and decay radiation (equivalent to 0001000010.)
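For instance, as an illustrative decoding: WHAT(5) = 20000010. is read as the 10-digit number 0020000010,
i.e. xxxxx = 00200 and yyyyy = 00010, giving a cutoff multiplication factor 200 × 0.1 = 20 for β± and γ decay
radiation and 10 × 0.1 = 1 (cutoffs unchanged) for prompt e± and γ radiation.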
WHAT(6) : flag for generating β+/β− spectra with Coulomb and screening corrections
are then expressed per isotope decay. Note that command DCYSCORE must be issued with WHAT(1) = -1,
and must apply to all relevant estimators and detectors. Without DCYSCORE, no scoring will occur (see
Note 8 to command BEAM).
Example:
* In this example, radioactive decays are activated for requested cooling
* times, with an approximated isomer production. Each radioactive nucleus
* produced will be duplicated.
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...
RADDECAY 1.0 1.0 2.0 111000. 200.
* Any biasing of electrons, positrons and photons is applied only to
* prompt particles in the electromagnetic shower, and not to beta and
* gamma particles from radioactive decay.
* The transport energy cutoffs set by EMFCUT (or by DEFAULTS) are
* applied as such to decay betas and gammas, but are multiplied by a
* factor 20 when applied to prompt particles.
7.59 RANDOMIZe
Sets the seeds for the double-precision random number generator RM64
Default (option RANDOMIZe not given): standard seeds are used as implemented
Notes
1. The random number generator can be now initialised in one way only, namely to read from an external file
(generated in a previous run) a vector of 97 seeds (the file contains also some auxiliary information). If that
file is missing or empty or anyway invalid, the code will initialise the random number generator in its default
state.
2. While the number of calls to the random number generator are printed on the standard output at the end of
each primary history (or group of histories — see WHAT(5) of option START, p. 231), the 97 seeds are printed
at the same time on a separate file. Skipping calls is therefore unnecessary in order to re-start a run at a
particular point (e.g., where it stopped because of a crash). However, it is still possible to skip a given number
of calls by running the random number generator stand-alone.
3. It is mandatory to use only seeds output information as written by the program in earlier runs on the same
computer platform. Otherwise the randomness of the number sequence would not be guaranteed.
4. Flrn64 is a portable random number generator in double precision, written in Fortran by P. Sala. It is based
on an algorithm by Marsaglia and Tsang [125]. It gives random floating point numbers in the interval
[0,1), with 53 significant bits of mantissa.
5. Different numbers input as WHAT(2) will initialise different and independent random number sequences, al-
lowing the user to run several jobs in parallel for the same problem.
The default is 1234598765.
Example 1:
* The seeds for the random number generator will be read from the file connected
* with logical unit 1
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
RANDOMIZE 1.0 0.0 0.0 0.0 0.0 0.0 0.0
Example 2:
* This run will be completely independent statistically from the previous one
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
RANDOMIZE 1.0 4042731. 0.0 0.0 0.0 0.0 0.0
7.60 RESNUCLEi
Scores stopping nuclei on a region basis.
WHAT(4) = maximum M = N - Z - (NMZ)min of the residual nuclei distribution (see Notes 2 and 3)
Default : according to the mass and atomic number A, Z of the element(s) of the material
assigned to the scoring region
WHAT(5) = scoring region number or name
= -1.0: all regions (see Note 9)
Default = 1.0
WHAT(6) = volume of the region in cm^3 (or, more generally, a normalisation factor by which the scoring
shall be divided)
Default = 1.0
Notes
1. Elements or isotopes for which the Fluka low-energy neutron cross sections contain information on the pro-
duction of residual nuclei are indicated by “Yes” in column 5 (“Residual nuclei”) of Table 10.3 (p. 325) where
the components of the neutron cross section library are listed.
The same information can be obtained by setting the printing flag in the LOW–NEUT option (WHAT(4) > 0.0).
If such data are available for a given nuclide, the following message is printed on standard output:
(RESIDUAL NUCLEI INFORMATIONS AVAILABLE)
2. To minimise storage, nuclei are indexed by Z (with Zmin = 1) and NMZ = N - Z (with (NMZ)min = -5). The
parameter M is defined as M = NMZ - (NMZ)min : therefore Mmin = 1. The following relations can also be useful:
N - Z = M + (NMZ)min and N = M + Z + (NMZ)min
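For instance, for 60Co (Z = 27, N = 33): NMZ = N - Z = 6 and M = NMZ - (NMZ)min = 6 - (-5) = 11.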
3. In the case of heavy ion projectiles the default NMZ, based on the region material, is not necessarily sufficient
to score all the residual nuclei, which could include possible ion fragments.
4. In order to achieve reasonable results for residual nuclei production the new evaporation module must be
activated (it is currently the default) and heavy fragment evaporation should also be activated (it is not the
default because of the related large CPU penalty). Coalescence should also be activated (see option PHYSICS,
p. 202 for all these settings). The old evaporation is still available, mostly because of historical reasons, but
it does not produce meaningful results for residuals. The new evaporation, available since 1997, is far more
sophisticated in this respect, while differences in emitted particle spectra and multiplicities are weak.
5. Starting with Fluka2006.3, protons are scored together with 2H, 3H, 3He and 4He at the end of their path, if
transported (see option IONTRANS, p. 148). This is a change with respect to previous versions where protons
were not scored.
6. All residual nuclei are scored when they have been fully de-excited down to their ground or isomeric state.
7. Radioactive decay of residual nuclei can be performed by Fluka in the same run: see commands DCYTIMES
(p. 91), DCYSCORE (p. 89), IRRPROFIle (p. 149) and RADDECAY (p. 214), or can be done off-line by a
user-written code (see for instance the program Usrsuwev available with the normal Fluka distribution). If
command IRRPROFI has been issued, RESNUCLEi results provided by detectors associated to a cooling time
index via DCYSCORE will be expressed in Bq.
8. An example on how to read RESNUCLEi unformatted output is shown below. An explanation of the meaning
of the different variables is given in the comments at the beginning of the program. The program lists the Z
and A of the produced nuclei, followed by the corresponding amount per unit volume.
A more complex program Usrsuw, which also allows computing standard deviations over several runs, is
available with the normal Fluka code distribution in directory $FLUPRO/flutil.
A special version of the same program, Usrsuwev, provides in addition a calculation of induced activity and
of its evolution in time.
9. Setting WHAT(5) = -1 will provide the sum of the residual nuclei in all regions, divided by the value set by
the user for WHAT(6).
PROGRAM RDRESN
*---------------------------------------------------------------------*
* Up to MXRSNC user-defined track or coll are allowed *
* izrhgh = maximum Z of the scoring (minimum Z: 1) *
* imrhgh = maximum M=N-Z-NMZ_min of the scoring *
* (minimum M: 1). Note: *
* N-Z = M + NMZ_min, N = M + Z + NMZ_min *
* itursn = type of binning: 1 = spallation products, *
* 2 = low-energy neutrons products, *
* 3 = all products *
* nrursn = region *
* vursnc = volume (cm**3) of the detector *
* tiursn = scoring name *
*---------------------------------------------------------------------*
PARAMETER ( MXRSNC = 400 )
CHARACTER*10 TIURSN
CHARACTER RUNTIT*80, RUNTIM*32, FILNAM*80
Example:
* Calculate residual nuclei produced in an iron slab (region 6) and in a zinc
* vessel (region 10). Heavy recoils are transported (option IONTRANS) and scored
* at the point where they stop. The new evaporation model is activated to ensure
* a better quality of the results. For iron, all residual nuclei are scored. For
* zinc, no data are available for low-energy neutrons, so only nuclei produced
* by spallation/evaporation are scored. Results are written (formatted) on
* logical unit 22 and 23, respectively.
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
MATERIAL 26.0 0.0 7.87 11. 0.0 0. IRON
MATERIAL 30.0 0.0 7.133 12. 0.0 0. ZINC
ASSIGNMAT 11.0 6.0 9.0 0.0 ! Four Fe slabs
ASSIGNMAT 12.0 10.0 0.0 0.0 ! Zn vessel
IONTRANS -2.0
PHYSICS 2.0 0.0 0.0 0.0 0.0 0. EVAPORAT
RESNUCLEI 3.0 22.0 0.0 0.0 6.0 0. FirstFe
RESNUCLEI 1.0 23.0 0.0 0.0 10.0 0. Znvessel
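A region-summed variant of the request above could look like the following sketch (all values are illustrative;
WHAT(5) = -1.0 selects all regions as in Note 9, and WHAT(6) is then the overall normalisation factor):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
RESNUCLEI 3.0 24.0 0.0 0.0 -1.0 1500. AllRegs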
7.61 ROT–DEFIni
Defines rotations and translations to be applied to binnings.
See also EVENTBIN, ROTPRBIN, USRBIN, and LATTICE (in Chap. 8, p. 297)
Notes
1. Fluka binnings (spatial meshes independent of the problem geometry, designed to score average or event-by-
event quantities) are generally defined as Cartesian structures parallel to the coordinate axes, or as cylindrical
structures parallel to the z-axis. However, it is possible to define binnings with any arbitrary position and
direction in space, by means of transformations described by commands ROT–DEFIni and ROTPRBIN.
Command ROT–DEFIni defines rotations/translations to be applied to binnings (requested by the user by
means of EVENTBIN (p. 124) or USRBIN) (p. 249). Each transformation defined by ROT–DEFIni is assigned a
number WHAT(1) which can be applied to one or more binnings. The correspondence between transformation
index and binning number is assigned via option ROTPRBIN (p. 224).
2. Command ROT–DEFIni can be used also to define roto-translations to be applied to lattice cells. Command
LATTICE (see description in 8.2.10) sets the correspondence between transformation index and lattice cell.
3. Command ROT–DEFIni can be used also to define roto-translations to be applied to bodies in the geometry,
as requested by the $Start_transform.....$End_transform directive (see 8.2.5.3).
4. The transformation matrices are:
j = 1:
\[
\begin{pmatrix} X_{new} \\ Y_{new} \\ Z_{new} \end{pmatrix} =
\begin{pmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & \sin\phi \\ 0 & -\sin\phi & \cos\phi \end{pmatrix}
\begin{pmatrix} X_{old} + X_{offset} \\ Y_{old} + Y_{offset} \\ Z_{old} + Z_{offset} \end{pmatrix}
\]
j = 2:
\[
\begin{pmatrix} X_{new} \\ Y_{new} \\ Z_{new} \end{pmatrix} =
\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} \cos\phi & 0 & -\sin\phi \\ 0 & 1 & 0 \\ \sin\phi & 0 & \cos\phi \end{pmatrix}
\begin{pmatrix} X_{old} + X_{offset} \\ Y_{old} + Y_{offset} \\ Z_{old} + Z_{offset} \end{pmatrix}
\]
j = 3:
\[
\begin{pmatrix} X_{new} \\ Y_{new} \\ Z_{new} \end{pmatrix} =
\begin{pmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{pmatrix}
\begin{pmatrix} \cos\phi & \sin\phi & 0 \\ -\sin\phi & \cos\phi & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} X_{old} + X_{offset} \\ Y_{old} + Y_{offset} \\ Z_{old} + Z_{offset} \end{pmatrix}
\]
5. The overall rotation is the product of the two matrices above, Rij = Tik Pkj . For the special cases
θ = π/2, φ = 0 and θ = 0, φ = π/2 the rotated coordinates are:
θ = π/2, φ = 0:
j = 1: x′ = y, y′ = −x, z′ = z
j = 2: x′ = x, y′ = z, z′ = −y
j = 3: x′ = −z, y′ = y, z′ = x
θ = 0, φ = π/2:
j = 1: x′ = x, y′ = z, z′ = −y
j = 2: x′ = −z, y′ = y, z′ = x
j = 3: x′ = y, y′ = −x, z′ = z
That is, the vector which has position angles θ and φ with respect to the jth axis in the original system, will
become the jth axis in the rotated system. For the special case θ = 0 this implies a rotation of −φ in the
original frame. In practice it is more convenient to think about the inverse rotation, the one which takes the
jth versor into the versor with θ and φ.
6. Note that a transformation can be defined recursively, for example with two cards pointing to the same
transformation. If Pij is the rotation corresponding to the first card and Tij the one corresponding to the
second card, the overall rotation will be Rij = Tik Pkj
7.62 ROTPRBIN
Sets the amount of storage and the storage precision for binnings (single or
double).
Sets also the correspondence between rotations/translations and binnings.
WHAT(2) : index (or name) of the rotation/translation matrix (defined by a ROT-DEFIni card) which is
associated to the binning(s) indicated by WHAT(4). . . WHAT(6) (see Note 4)
≤ -1.0: resets the associated rotation/translation to identity (transformation index = 0)
WHAT(4) = lower index bound of binnings in which the requested storage precision and/or transformation
must be applied
(“From binning WHAT(4). . . ”)
Default = 1.0
WHAT(5) = upper index bound of binnings in which the requested storage precision and/or transforma-
tion must be applied
(“. . . to binning WHAT(5). . . ”)
Default = WHAT(4)
Default (option ROTPRBIN not given): binning data are stored in double precision, and no rota-
tion/translation is applied
Notes
1. Command ROTPRBIN can be used for three different tasks, all related to binnings requested by the user by
means of EVENTBIN (p. 124) or USRBIN (p. 249):
(a) to define the precision used at run-time to store the accumulated scores of selected binnings
(b) to allocate a reduced amount of memory as a storage for selected binnings
(c) to set the correspondence between the index of a transformation (rotation/translation as defined by
command ROT–DEFIni, see p. 221) and the index of selected binnings.
2. The USRBIN/EVENTBIN output values are always in single precision, regardless of the run-time storage pre-
cision used. Run-time storage precision, which is double by default, should never be changed for binnings
defined by USRBIN, to prevent a severe loss of data: small scores added to a large accumulated value can be
lost in rounding, so that the content no longer increases with the number of primary particles. However, this
is unlikely to happen with EVENTBIN binnings, which are reset at the end of each history.
3. In many cases, binnings defined by EVENTBIN result in a number of sparse “hit” cells, while all other bins are
empty (scoring zero). In such cases, it is convenient to allocate less storage than required by the whole binning
structure. See also Note 2 to option EVENTBIN, p. 124.
4. Binning space transformations (rotations and translations) are those defined by a ROT–DEFIni card. That is,
the variables used for scoring are the primed ones (x′, y′, z′) (see Notes 4 and 5 to option ROT–DEFIni).
Example 1:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
ROTPRBIN 85500. 0.0 0.0 2.0 6.0 2.0
* Allocate only 85.5% of the memory normally required for binnings 2, 4 and 6
Example 2:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
ROTPRBIN 10001. 0.0 0.0 3.0 5.0 0.0
* Set single storage precision for binnings 3, 4 and 5
Example 3:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
ROTPRBIN myMatrix energyA HEhadrA
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
ROT-DEFIni 44. 0. 0. 0. 0. -16000. myMatrix
ROT-DEFIni 244. 0. 10. 0. 0. 0. myMatrix
ROT-DEFIni 44. 0. 0. 0. 10000. 10000. myMatrix
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
USRBIN 11. ENERGY -21. 1. 5. 50. energyA
USRBIN -1. -1. -5. 10. 10. 50. &
USRBIN 11. DOSE -21. 1. 5. 50. doseA
USRBIN -1. -1. -5. 10. 10. 50. &
USRBIN 11. HADGT20M -21. 1. 5. 50. HEhadrA
USRBIN -1. -1. -5. 10. 10. 50. &
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
* Associate the "myMatrix" ROT-DEFIni to the "energyA", "doseA" and "HEhadrA"
* USRBINS
7.63 SCORE
Defines the (generalised) particles producing the stars to be scored in each
region.
Requests scoring of energy deposition in each region.
Depending on the (generalised) particle type, different quantities are scored in each region:
– For hadrons, photons, muons: stars (see Notes 3, 4a)
– For generalised particles 208.0 and 211.0: energy deposition (Notes 4b, 4c).
– For generalised particles 219.0, 220.0 and 221.0: fissions (Note 4d).
– For generalised particle 222.0, neutron balance (Note 4e).
– For generalised particles 229.0 and 230.0: unbiased energy deposition (Note 4f).
= 0.0: no scoring per region of any of the quantities listed above
Default = 201.0, 0.0, 0.0, 0.0 (score stars produced by all particles)
Notes
1. The possible particle numbers are those listed in 5.1, i.e., -6.0 to 62.0 and 201.0 to 244.0. However, not
all particles will give meaningful results: for instance particles 3.0 and 4.0 (electrons and positrons) cannot
produce stars, fissions, etc.
Selecting generalised particles 208.0 (energy) or 211.0 (“electromagnetic” energy, i.e., energy of e+ e− and γ),
one can score deposited energy (proportional to dose).
2. SCORE is one of the oldest Fluka commands, which has been kept unchanged because of its simplicity and
ease of use. On the other hand, because it lacks the flexible memory allocation of all other scoring options,
there is presently room for only 4 types of particles. Therefore, only the 4 first valid WHAT-parameters are
retained.
3. A star is a hadronic inelastic interaction occurring at an energy higher than a threshold defined via the option
THRESHOLd (or by default higher than the transport threshold of the interacting particle). Star scoring,
traditionally used in most high-energy shielding codes, can therefore be considered as a form of crude collision
estimator: multiplication of star density by the asymptotic value of the inelastic nuclear interaction length
gives the fluence of hadrons having energy higher than the current threshold. However, this is meaningful only
if the interaction length doesn’t vary appreciably with energy; therefore it is recommended to set a scoring
threshold = 50 MeV (using option THRESHOLd), since interaction lengths are practically constant above this
energy.
Besides, star densities calculated with a 50 MeV threshold are the basis of some established radiation protection
techniques such as the ω-factors for estimating material activation (see [200], p. 106), and the prediction of
single isotope yields from the ratio of partial to inelastic cross section. Note that such techniques can still
be used, although they have been made obsolete by more accurate modern Fluka capabilities (commands
DCYSCORE, DCYTIMES, IRRPROFI, RADDECAY, RESNUCLEi).
5. A more flexible way to score by region stars, deposited energy etc. is “region binning” by means of option
USRBIN (7.77, Note 15). However, note that in that case the results are not normalised per unit volume.
6. In Fluka, stars do not include spallations due to annihilating particles.
7. SCORE does not define scoring done via USRBDX, USRBIN, USRCOLL and USRTRACK.
Example 1:
* Score stars produced in each region by protons, high-energy neutrons and pions.
* Score also total energy deposition in each region.
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
SCORE 1.0 8.0 209.0 208.0
Example 2:
* Score stars produced by primary particles (i.e., first interactions) in each
* region. Score also in each region stars produced by photons (photonuclear
* reactions) and energy deposited by electromagnetic showers.
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
SCORE 210.0 7.0 211.0
Example 3:
* Score fissions produced in each region by high- and low-energy particles.
* Score also the net neutron production in each region, and the kaon stars.
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
SCORE 220.0 221.0 222.0 215.0
7.64 SOURCE
Invokes the use of a user-defined source routine SOURCE to sample the pri-
mary particles.
This option allows to input up to 12 double precision parameters and one character string chosen by the
user, to be passed to the user routine SOURCE. To pass more than 6 parameters, two successive SOURCE cards
are required.
First card:
WHAT(1) . . . WHAT(6): user parameters to be passed to the SOURCE routine as a double precision
subarray WHASOU(1). . . WHASOU(6) (via COMMON SOURCM).
SDUM = any user dependent character string (not containing “ & ”), to be passed to the SOURCE
routine as a character variable SDUSOU (via COMMON CHEPSR).
Continuation card:
WHAT(1) . . . WHAT(6): user parameters to be passed to the SOURCE routine as a double precision
subarray WHASOU(7). . . WHASOU(12).
SDUM = “ & ” in any position in column 71 to 78 (or in the last field if free format is used)
Notes
1. In many simple cases, the primary particle properties can be defined by just two input cards: BEAM (p. 71)
and BEAMPOS (p. 76). The two options define the type of the particle, its energy/momentum (monoenergetic
or simply distributed) and its starting position and direction (also sampled from a few simple distributions
centred around the z-axis). A third option, POLARIZAti (p. 212), can be used to complete the description of
the primaries in case they are totally or partially polarised.
However, there are more complex situations where the type of primary particles and their phase space coor-
dinates must be sampled from different types of distributions, read from files, or generated by a rule decided
by the user. To handle these cases, it is possible to call a user routine SOURCE which can override totally or in
part the definitions given in input. The call is activated by option SOURCE. A default version of the routine,
which leaves any other input definition unchanged, is present in the Fluka library. Instructions on how to
write, compile and link SOURCE are given in 13.2.19.
2. Even when overridden by SOURCE, the momentum or kinetic energy defined by WHAT(1) of option BEAM is
meaningful, since it is taken as maximum energy for several scoring facilities and for cross section tabulations.
Therefore, it is recommended to input in any case a BEAM card with the maximum energy expected in the
current run.
3. The user has the possibility to write a flexible SOURCE routine which can handle different cases without being
recompiled and linked. The 12 WHASOU optional double precision parameters and the SDUSOU character string
can be combined to provide a multitude of possible options.
4. Initialisations (for instance reading data from files, or spectrum normalisation needed for sampling) can be
done in SOURCE itself, as explained in 13.2.19, or in two other user routines USRINI and USRGLO. USRINI is called
every time a card USRICALL (p. 260) is found in input (see 13.2.27), while USRGLO is called before any other
initialisation made by Fluka. Note that more than one primary particle can be sampled in the same call to
SOURCE and loaded into the stack for later transport. A further user routine, USREIN (see 13.2.24), which is called
just after SOURCE and before the sampled primary (or primaries) are transported, allows the user to do an
initialisation at the beginning of each event. An event is defined as all the histories of primaries sampled in a
single call to SOURCE and of their descendants. Routine USREOU is called instead at the end of each event (see
13.2.25).
5. In old versions of Fluka, the call to SOURCE was requested by means of a flag in card START. This feature has
been discontinued.
Example 1:
* A user-written SOURCE routine is called without passing any parameter.
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
SOURCE
Example 2:
* Here the user passes to the SOURCE routine 7 numerical values and one
* character string. These can be used as free parameters inside the routine,
* or as flags/switches to select between different options
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
SOURCE 12.58 1.0 -14. 0.987651 100. 365.FLAG18
SOURCE 999.2 &
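On the user side, the values of Example 2 become available inside SOURCE through the WHASOU array and the
SDUSOU string, as described above. The following fragment is only a hedged sketch of this mechanism: it is not
the complete source.f template of 13.2.19, the include files are assumed to be those of the standard template,
and the variable names EPEAK and ISWTCH are arbitrary.
*     Hedged, incomplete sketch of a user SOURCE routine (see 13.2.19 for
*     the full template): it only shows where the values passed by the
*     two SOURCE cards of Example 2 arrive.
      SUBROUTINE SOURCE ( NOMORE )
      INCLUDE '(DBLPRC)'
      INCLUDE '(DIMPAR)'
      INCLUDE '(IOUNIT)'
      INCLUDE '(SOURCM)'
      LOGICAL LFIRST
      SAVE LFIRST, EPEAK, ISWTCH
      DATA LFIRST / .TRUE. /
      NOMORE = 0
      IF ( LFIRST ) THEN
         LFIRST = .FALSE.
*        Here WHASOU(1)...WHASOU(7) = 12.58, 1.0, -14., 0.987651, 100.,
*        365., 999.2 and SDUSOU = 'FLAG18' (via COMMON CHEPSR)
         EPEAK  = WHASOU (1)
         ISWTCH = NINT ( WHASOU (2) )
      END IF
*     The actual sampling of the primary and the loading of the particle
*     stack (omitted here) must follow the distributed template.
      RETURN
      END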
7.65 SPECSOUR
Defines one of the following special sources:
– two colliding beams
– Galactic Cosmic Rays
– Solar Particle Event
– synchrotron radiation
See also BEAM, BEAMAXES, BEAMPOSit, GCR–SPE, HI–PROPErt, POLARIZAti, SOURCE, USRICALL, USRGCALL
This option allows to input up to 18 double precision parameters, depending on the option specified by
SDUM. A first continuation card, if required, is marked by a “&” in any position in column 71 to 78, or in
the last field if free format is used. A possible second continuation card is similarly marked by “&&”.
For SDUM = PPSOURCE or ppsource, or CROSSASY or CROSSSYM, the source is produced by two colliding
beams. The description of these options is presented separately in Chapter 15.
Cosmic ray sources are requested by one of the following SDUM values:
– Galactic Cosmic Rays: GCR–IONF, GCR–SPEC, GCR–ALLF
– Solar Particle Events: SPE–SPEC, SPE–2003, SPE–2005
The description of these options is presented separately in Chapter 16.
For SDUM = SYNC-RAD or SYNC-RDN, the source consists of synchrotron radiation photons produced by
a charged particle travelling in a magnetic field. A description of this option is presented separately in
Chapter 17.
7.66 START
Defines the termination conditions, gets a primary from a beam or from a
source and starts the transport.
WHAT(4) = 1.0: a core dump is triggered when the built-in abort routine, FLABRT, is called
WHAT(5) = 0.0: a line reporting the number of calls to the random number generator (in hexadecimal
form) is printed at the beginning of each history only for the first ones, and then with
decreasing frequency
> 0.0: the number of calls is printed at the beginning of each history.
Default (option START not given): the other input cards are read and an echo is printed on the standard
output, but no actual simulation is performed. However two input cards, both related to geom-
etry, have an effect even if no START card is present: GEOEND (p. 139) (with SDUM = DEBUG)
and PLOTGEOM (p. 209). In all cases in which particles are transported, START must always
be present.
Notes
1. The interactive time limit indicated by WHAT(6) can be used only on some systems which can provide a signal
when the time limit is approaching. On personal workstations, generally no time limit is enforced.
2. It is also possible to terminate a Fluka run before the pre-set time has expired or the total number of histories
has been completed. To this effect, the user may create a “stopping file” (its content is irrelevant, since only
its existence is checked by the program).
When the program detects the existence of such a file, the time left is set to zero, the run is terminated at the
end of the current history, and the stopping file itself is erased.
The name of the stopping file is fluka.stop or rfluka.stop on UNIX. While the presence of fluka.stop
terminates only the current job, rfluka.stop also skips the successive jobs requested via the rfluka script.
Example:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
START 70000.0
* Request a run of 70000 primary particles
7.67 STEPSIZE
Sets the minimum and maximum step size on a region-by-region basis for
transport of all charged particles (hadrons, muons and electrons)
WHAT(1) ≥ 0.0: minimum step size in cm (overrides region by region the overall minimum step set by
WHAT(2) of MGNFIELD, if larger than it)
< 0.0: whatever happens, the location of boundary intersections will be found with an un-
certainty ≤ |WHAT(1)|. Of course, for a given boundary, this value should be applied
to both regions defining the boundary.
Default : no default
WHAT(3) = lower bound of the region indices in which the indicated step size is to be applied
(“From region WHAT(3). . . ”)
Default = 2.0
WHAT(4) = upper bound of the region indices in which the indicated step size is to be applied
(“. . . to region WHAT(4). . . ”)
Default = WHAT(3)
Default (option STEPSIZE not given): the above defaults apply in all regions (10 cm with magnetic
field, 10⁵ cm without)
Notes
1. This option differs from EMFFIX (p. 118) and FLUKAFIX (p. 133) for the following main reasons:
(a) it is given by region rather than by material
(b) therefore, it is effective also in vacuum regions
(c) the maximum step is determined in an absolute way (i.e., in cm) rather than in terms of maximum energy
loss
(d) it allows to set not only a maximum but also a minimum step length. This may be necessary in order to
avoid the forcing of extremely small steps when a low-energy charged particle spirals in a magnetic field
Option MGNFIELD (p. 172, Note 6) offers a similar possibility, but not tuned by region.
2. Option STEPSIZE may be essential in and around regions of very small dimensions or in regions where a
magnetic field is present (and is rarely required otherwise).
3. The maximum step size for a given region can be decided from the following considerations:
– in a region with magnetic field it should not be larger than the minimum dimension of the region itself
and of its neighbour regions. Obviously, it should also be larger than the minimum step possibly imposed
by MGNFIELD or by STEPSIZE itself.
– in a non-vacuum region, it should not be larger than about one-third of its minimum dimension, in order
to allow the multiple scattering algorithm to work in optimal conditions.
7.68 STERNHEIme
Allows to input Sternheimer density effect parameters
SDUM = index of the material to which the above Sternheimer parameters apply. Exceptionally, here
SDUM must be an integer number, in free format, rather than a character string.
Default (option STERNHEIme not given): density effect parameters are computed according to the
Sternheimer-Peierls general formula
Notes
1. For gases the parameters are supposed to be given at 1.0 atm (NTP); the code takes care to scale them to the
actual pressure as given by the MAT–PROP card (p. 165).
2. MAT–PROP can be used also to override the value of the average ionisation potential used by the program.
Recommended Sternheimer parameters and ionisation potentials are automatically provided by the program
for elemental materials. For compounds and mixtures, see [195].
3. STERNHEIme is one of the two Fluka options where SDUM is used to input numerical data. (Actually, the
material number is first read as a string and then an internal reading is performed on the string to get the
number).
Example:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
MATERIAL 29. 0.0 8.96 12. 0.0 0. COPPER
STERNHEIme 4.4190 -0.0254 3.2792 0.14339 2.9044 0.08 12
* Use the copper Sternheimer parameters published in At. Data Nucl. Data
* Tab. 30, 261-271 (1984)
7.69 STOP
Stops the execution of the program
Default (option STOP not given): no effect (the program stops at the end of the run when the conditions
set in the START command (p. 231) are satisfied).
Notes
1. Inserted at any point in a Fluka input sequence before the START command, a card STOP interrupts input
reading and de-activates all the following cards. It can thus help in debugging input. After START, its presence
is optional and has no effect.
2. When running the geometry debugger or plotting a slice of the geometry, it is often convenient to place a
STOP command just after the GEOEND cards or after the PLOTGEOM (p. 209) input. Otherwise, once the
debugging or plotting has been completed, Fluka would continue reading input cards and eventually would
start particle transport as soon as a card START is found.
Example:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
GEOEND 150. 75. 220. 30. 0. -220.DEBUG
GEOEND 120. 1. 110. 0. 0. 0. &
STOP
* Debugs the geometry and stops without starting a simulation
7.70 TCQUENCH
Sets time cutoffs and/or quenching factors when scoring using the USRBIN
or the EVENTBIN options.
WHAT(4) = lower index bound of binnings (or corresponding name) in which the requested scoring time
cutoff and/or Birks law coefficients must be applied
(“From binning WHAT(4). . . ”)
Default = 1.0
WHAT(5) = upper index bound of binnings (or corresponding name) in which the requested scoring time
cutoff and/or Birks law coefficients must be applied
(“. . . to binning WHAT(5). . . ”)
Default = WHAT(4)
Default (option TCQUENCH not given): no time cutoff for scoring and no quenching of dose binning
Notes
1. Binnings are numbered sequentially in the order they are input via the USRBIN or EVENTBIN options. Of
course, for quenching to be applied the quantity binned must be energy (generalised particle 208 or 211). The
energy deposited in a charged particle step is “quenched” according to Birks law, i.e., it is weighted with a
factor dependent on stopping power S = dE/dx:

    dE′ = dE / (1 + B·S + C·S²)

with B = first Birks parameter and C = Chou or second Birks parameter [40].
2. The time cutoff is useful in order to score only within a time gate between t = 0 and the requested cutoff time.
3. The scoring time cutoff should not be confused with the time cutoff for transport (see TIME–CUT, p. 239).
Particles outside the time gate defined by TCQUENCH are still transported and can contribute to scoring in
regions and in binnings having a different time scoring cutoff.
4. If the source has been defined as a radioactive isotope (command BEAM with SDUM = ISOTOPE), transport
of each isotope decay secondary starts with an age equal to the time of decay.
Example:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
TCQUENCH 20.0 7.35E-3 1.45E-5 4.0 10.0 3.0
* Set a 20 sec time scoring cutoff for binnings 4, 7 and 10, and apply to
* them the NE213 scintillator Birks parameters published by Craun and
* Smith, Nucl. Instr. Meth. 80, 239 (1970)
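As a numerical illustration (the stopping power used here is an assumed value, and the Birks parameters are
taken to be expressed in g/(MeV·cm²) and g²/(MeV²·cm⁴), as for the analogous UDQUENCH parameters of
option USERDUMP, 7.74): for a charged particle step with S = 100 MeV·cm²/g, the example parameters give
a quenching factor 1/(1 + 0.00735 × 100 + 1.45 × 10⁻⁵ × 100²) ≈ 0.53, i.e., only about half of the energy
deposited in that step would be added to the affected binnings.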
7.71 THRESHOLd
Defines the energy threshold for star density scoring.
Sets thresholds for elastic and inelastic hadron reactions.
Default (option THRESHOLd not given): the threshold for star scoring is set at 20 MeV for protons and
neutrons, and at 50 MeV for all other hadrons
Notes
1. The possibility to change the threshold for elastic scattering or inelastic collisions (WHAT(3) and WHAT(4))
is not to be used in normal transport problems, but is made available to investigate the relative importance of
different processes on the studied problem.
2. For reasons explained in Note 3 to command SCORE (7.63), it is recommended to set the threshold for star
scoring (WHAT(6)) equal to 0.050 GeV overriding the default of 0.020 GeV. A 0.050 GeV cutoff was used in
the past to establish so-called ω-factors which are still currently used to estimate induced activity (see [107,
108, 153]). Stars defined in this way do not include those produced by annihilating particles.
3. The threshold for star scoring requested by option THRESHOLd applies only to the output related to options
SCORE and USRBIN (7.77). The number of stars reported in the summary statistics at the end of the standard
output is always based on the actual energy thresholds for nonelastic interactions (except for neutrons for which
stars are reported above 20 MeV).
Example 1:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
THRESHOLd 0.0 0.0 2.0 0.0 0.0 0.0
* Switch off elastic scattering of hadrons below 2 GeV
Example 2:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
THRESHOLd 0.0 0.0 0.0 0.0 0.0 0.05
* Score stars only above 50 MeV
7.72 TIME–CUT
Sets transport time cutoffs for different particles
WHAT(3) = material number for the “start” signal from which time is measured (useful for calorimetry).
Not implemented at present
Default: “start” at creation time
WHAT(4) = lower bound of the particle numbers (or corresponding name) for which the transport time
cutoff and/or the start signal is to be applied
(“From particle WHAT(4). . . ”)
Default = 1.0
WHAT(5) = upper bound of the particle numbers (or corresponding name) for which the transport time
cutoff and/or the start signal is to be applied
( “. . . to particle WHAT(5). . . ”)
Default = WHAT(4) if WHAT(4) > 0.0, all particles otherwise
Default (option TIME–CUT not given): no time cutoff for particle transport
Notes
1. The transport time cutoff defined by TIME–CUT should not be confused with the time cutoff for scoring defined
by TCQUENCH (see Note 3 to option TCQUENCH, p. 236).
2. Particles outside the time gate defined by TIME–CUT are discarded (a summary is printed at the end of the
standard output).
3. If the source has been defined as a radioactive isotope (command BEAM with SDUM = ISOTOPE), transport
of each isotope decay secondary starts with an age equal to the time of decay.
Example:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
TIME-CUT 3000.0 0.0 0.0 8.0 0.0 0.0
* Stop transporting neutrons after 3000 nsec
7.73 TITLE
Defines the title of the run
Notes
1. The title of the run must be given on the following card. Only one title may exist: if more than one is given,
only the last one is retained. The title is printed at the top of the standard output and of each estimator
output.
2. Giving a title is not mandatory, but it is recommended in order to identify the current run. The title is printed
on all estimator files.
Example:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
TITLE
Neutron background: 5 GeV electrons on a Cu slab 10 cm thick, first test
7.74 USERDUMP
Defines a collision “tape” (or better a collision file, or a phase space file) to
be written.
This command activates calls to the user routine MGDRAW and to its entries BXDRAW, EEDRAW, ENDRAW,
SODRAW, USDRAW (see description in 13.2.13).
The default version of the routine writes a complete dump (unformatted) of one or more of the following:
source particles, trajectories, continuous and local energy losses (as selected by WHAT(3) below).
Users can modify the routine by removing the existing lines of code and by writing their own code under
one or more of the entries.
For SDUM ≠ UDQUENCH:
WHAT(1) ≥ 100.0 : calls to MGDRAW and/or its entries are activated as directed by the values of WHAT(3)
and WHAT(4)
= 0.0: ignored
< 0.0: the default is reset, i.e., no dump is written
> 0.0 and < 100.0: not allowed! Originally used to request another form of collision tape.
Presently only the “new” form of collision tape is possible (the old one being incom-
patible with the present version of Fluka)
WHAT(2) : if the default version of MGDRAW is used, number of the unformatted output unit. Values
of WHAT(2) < 21.0 must be avoided because of possible conflicts with Fluka pre-defined
units.
If a user version is used, the output file can be defined as formatted or unformatted, and the
unit number can be defined by an explicit Fortran OPEN statement in MGDRAW.
Default : 49.0
WHAT(3) : if the default version of MGDRAW is used:
≤ 0.0: source particles, trajectories, continuous and local energy losses are all dumped
= 1.0: only source particles are dumped
= 2.0: only trajectories and continuous energy losses are dumped
= 3.0: only local energy losses are dumped (e.g., heavy recoil kerma, cutoff energy). Proton
recoils are not included (since recoil protons are transported by Fluka)
= 4.0: source particles, trajectories and continuous energy losses are dumped
= 5.0: source particles and local energy losses are dumped
= 6.0: trajectories and all energy losses (continuous and local) are dumped
≥ 7.0: source particles, trajectories, continuous and local energy losses are not dumped (but
user-defined dumps required by WHAT(4) are unaffected)
if a user version is used:
≤ 0.0: call to MGDRAW at each particle step and at each occurrence of a continuous energy
loss, to ENDRAW at each local energy loss, to SODRAW every time a source particle is
started, to BXDRAW at each boundary crossing, and to EEDRAW at each end of event.
= 1.0: calls to SODRAW and EEDRAW
≥ 7.0: no calls to MGDRAW, SODRAW, ENDRAW, EEDRAW, BXDRAW (but calls to USDRAW and
EEDRAW requested by WHAT(4) are unaffected)
Default = 0.0 (calls are made to MGDRAW, ENDRAW, SODRAW, BXDRAW and EEDRAW, provided
WHAT(1) ≥ 100.0)
WHAT(4) ≥ 1.0: user-defined dumps after collisions are activated (calls to USDRAW and EEDRAW)
= 0.0: ignored
< 0.0: resets to default (user dependent dumps after collisions are de-activated)
SDUM = name of the output file (max. 10 characters). The user can define a longer name by an
explicit Fortran OPEN statement in MGDRAW.
For SDUM = UDQUENCH:
WHAT(1) : BRKMG1(1), First Birks parameter for quenching, to be used in MGDRAW for a first
material, in g/(MeV·cm²)
WHAT(2) : BRKMG2(1), Second Birks parameter for quenching, to be used in MGDRAW for the first
material, in g²/(MeV²·cm⁴)
WHAT(3) : BRKMG1(2), First Birks parameter for quenching, to be used in MGDRAW for a second
material, in g/(MeV·cm²)
WHAT(4) : BRKMG2(2), Second Birks parameter for quenching, to be used in MGDRAW for the
second material, in g²/(MeV²·cm⁴)
WHAT(5) : BRKMG1(3), First Birks parameter for quenching, to be used in MGDRAW for a third
material, in g/(MeV·cm²)
WHAT(6) : BRKMG2(3), Second Birks parameter for quenching, to be used in MGDRAW for the third
material, in g²/(MeV²·cm⁴)
SDUM = UDQUENCH
Notes
1. The format of the default binary collision tape, the code number of the events and the variables written for
the different types of events are described in Chap. 11.
Be careful: if the default version of MGDRAW is used, the amount of output can be enormous.
2. The default options described above can be changed by modifying the user routine MGDRAW (see description
in 13.2.13).
3. Quenching, as requested with SDUM = UDQUENCH, can be applied to energy deposition (local at ENDRAW
calls or continuous at MGDRAW calls). Some information about Birks law can be found in Note 1 to option
TCQUENCH.
Example:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
USERDUMP 200. 37.0 2.0 TRAKFILE
* It is requested to write a binary file TRAKFILE, containing all trajectories and
* continuous energy losses, and pre-connected to the logical output unit 37.
7.75 USERWEIG
Defines the extra weighting applied to yields scored via the USRYIELD op-
tion, energy and star densities obtained via USRBIN, energy deposition and
and star production obtained via EVENTBIN, production of residual nuclei
obtained via RESNUCLEi, currents calculated by means of USRBDX, and flu-
ences calculated by means of USRBDX, USRTRACK, USRCOLL and USRBIN.
WHAT(3) > 0.0: yields obtained via USRYIELD and fluences or currents calculated with USRBDX,
USRTRACK, USRCOLL, USRBIN are multiplied by a user-supplied function FLUSCW at
scoring time (see p. 352).
1.0 ≤ WHAT(3) ≤ 2.0: FLUSCW is called before any check on the current detector (see Note 5)
> 2.0: FLUSCW is called only after checking that the current detector applies (see Note 5)
= 2.0 or 4.0: The routine FLDSCP is also called, applying a shift to the current binned track
= 0.0: ignored
WHAT(5) > 0.0: the USRRNC user subroutine is called every time a residual nucleus is generated
WHAT(6) > 0.0: energy and star densities obtained via SCORE (p. 226) and USRBIN (p. 249), as well as
energy deposition and star production obtained via EVENTBIN (p. 124) are multiplied
by a user-supplied function COMSCW at scoring time (see p. 350).
1.0 ≤ WHAT(6) ≤ 2.0: COMSCW is called before any check on the current detector (see Note 5)
> 2.0: COMSCW is called only after checking that the current detector applies (see Note 5)
= 2.0 or 4.0: The routine ENDSCP is also called, applying a shift to the current binned energy
loss
< 0.0: resets the default: no weighting
= 0.0: ignored
Default (option USERWEIG not given): no extra weighting is applied to any scored quantity
Notes
1. These weights are really extra, i.e., the results are multiplied by these weights at scoring time, but printed
titles, headings and normalisations are not necessarily valid. It is the user’s responsibility to interpret correctly
the output. Actually, it is recommended to insert into standard output a user-written notice informing about
the extra weighting
2. Setting the incident particle weight to a value different from 1.0 (in the BEAM card, p. 71) will not affect the
results, since the latter are always normalised to unit primary weight.
3. Note that USRBIN (p. 249) can be used to calculate star or energy density, and in this case function COMSCW
has to be used. But when using USRBIN to calculate track-length fluences, the function to be used is FLUSCW.
4. Note that functions FLUSCW and COMSCW can contain user-written logic to tune the multiplication factor (which
can have even a value = 0.0 or 1.0 !) according to position in space, direction, energy, material, type of
particle, time, binning number etc. This allows to score only under certain conditions, or in any case to extend
considerably the capability of the code. Similar possibilities exist for the offset provided by routines FLDSCP
and ENDSCP.
5. For some applications, a call to the user routines FLUSCW or COMSCW is desired independently of whether the
current detector applies. But in general this is not the case and it is convenient to check first that a score
actually is taking place, saving a large number of function calls. Different values of WHAT(3) and WHAT(6)
allow the user to choose one of the two possibilities.
6. User-written functions FLUSCW, COMSCW, FLDSCP, ENDSCP and USRRNC are described in Chap. 13.
Examples:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....
USERWEIG 0. 0. 0. 0. 0. 1.
* Dose and star densities will be multiplied by a value returned by
* function COMSCW according to the logic written by the user.
* No check on the detector is done before calling the function.
USERWEIG 0. 0. 4. 0. 0. 0.
* Fluences and currents will be multiplied by a value returned by
* function FLUSCW according to the logic written by the user.
* The function will be called only for detectors to which the present
* score applies.
USERWEIG 0. 0. 0. 0. 1. 0.
* Residual nuclei scores will be accomplished by subroutine USRRNC according to
* the logic written by the user.
7.76 USRBDX
Defines a detector for a boundary crossing fluence or current estimator
The full definition of the detector may require two successive cards (the second card, identified by the
character “ & ” in any column from 71 to 78 (or in the last field in case of free input format), must be given
unless the corresponding defaults are acceptable to the user)
First card:
WHAT(4) = first region defining the boundary (in case of one-way scoring this is the upstream region)
Default = 1.0
WHAT(5) = second region defining the boundary (in case of one-way scoring this is the downstream
region)
Default = 2.0
SDUM = any character string (not containing “ & ”) identifying the detector (max. 10 characters)
Continuation card:
WHAT(5) : If linear angular binning: minimum solid angle for scoring (sr)
Default = 0.0
If logarithmic angular binning: solid angle of the first bin (sr)
Default = 0.001
SDUM = “ & ” in any position in column 71 to 78 (or in the last field if free format is used)
Notes
1. The formatted results of a USRBDX boundary crossing estimator, and the results written unformatted and
converted to formatted by the post-processing program $FLUPRO/flutil/usxrea.f, are given as double differ-
ential distributions of fluence (or current) in energy and solid angle, in units of cm−2 GeV−1 sr−1 per incident
primary, even when only 1 interval (bin) has been requested, which is often the case for angular distributions.
Thus, for example, when requesting a fluence or current energy spectrum, with no angular distribution, to
obtain integral binned results (fluence or current in cm−2 per energy bin per primary) one must multiply the
value of each energy bin by the width of the bin (even for logarithmic binning), and by 2π or 4π (depending
on whether one-way or two-way scoring has been requested).
If the results have been written unformatted, and processed by the post-processing program
$FLUPRO/flutil/usxsuw.f, two different files are produced, respectively with extension sum.lis and tab.lis.
The results reported in the sum.lis file are given as double differential distributions of fluence (or current) in
energy and solid angle, in units of cm−2 GeV−1 sr−1 per incident primary, even when only 1 interval (bin) has
been requested, in the same way as the formatted results described above. Instead, the results reported in the
tab.lis file are integrated over solid angle, as specified in the title line. This is also true for a post-processing
done by Flair, which is based on the tab.lis file.
2. Angular distributions must be intended as distributions in cos θ, where θ is the angle between the particle
trajectory and the normal to the boundary at the point of crossing.
When logarithmic scoring is requested for angular distributions, all intervals have the same logarithmic width
(equal ratio between upper and lower limit of the interval), except the first one. The limits of the first angular
interval are θ = 0 and the value indicated by the user with WHAT(5) in the second USRBDX card.
3. If the generalised particle is 208.0 (ENERGY) or 211.0 (EM–ENRGY), the quantity scored is differential energy
fluence (if cosine-weighted) or differential energy current (energy crossing the surface). In both cases the
quantity will be expressed in GeV per cm2 per energy unit per steradian per primary. That can sometimes lead
to confusion since GeV cm−2 GeV−1 sr−1 = cm−2 sr−1 , where energy does not appear. Note that integrating
over energy and solid angle one gets GeV/cm2 .
4. The maximum number of boundary crossing detectors that the user can define is 1100.
5. The logical output unit for the estimator results (WHAT(3) of the first USRBDX card) can be any one of the
following:
– the standard output unit 11: estimator results will be written on the same file as the standard Fluka
output.
– a pre-connected unit (via a symbolic link on most UNIX systems, ASSIGN under VMS, or equivalent
commands on other systems)
– a file opened with the Fluka command OPEN
– a file opened with a Fortran OPEN statement in a user-written initialisation routine such as USRINI, USRGLO
or SOURCE (see 13.2.27, 13.2.26, 13.2.19)
– a dynamically opened file, with a default name assigned by the Fortran compiler (typically fort.xx or
ftn.xx, with xx equal to the chosen logical output unit number).
The results of several USRBDX detectors in the same Fluka run can be written on the same file, but of course
only if they are all in the same mode (all formatted, or all unformatted).
It is also possible in principle to write on the same file the results of different kinds of estimators (USRTRACK,
USRBIN, etc.) but this is not recommended, especially in the case of an unformatted file, because it would
make very difficult any reading and analysis.
6. When scoring neutron fluence or current, and the requested energy interval structure overlaps with that of the
low energy neutron groups, interval boundaries are forced to coincide with group boundaries and no interval
can be smaller than the corresponding group. Actually, the program uses the requested energy limits and
number of intervals to estimate the desired interval width. The number of intervals above the upper limit of
the first low-energy neutron group is recalculated according to such width. To preserve the requested upper
energy limit, the width of the first interval above the low energy group may be smaller than that of the others.
Note that the lowest energy limit of the last neutron group is 10⁻¹⁴ GeV (10⁻⁵ eV) for the 260 data set. All
group energy boundaries are listed in Table 10.1 on p. 323.
7. If the scored fluence or current is that of a generalised particle which includes neutrons (e.g., ALL-PART,
ALL-NEUT, NUCLEONS, NUC&PI+-, HAD-NEUT, and even ENERGY), the spectrum is presented in two separate tables.
One table refers to all non-neutron particles and to neutrons with energies > 20 MeV. The second table refers
only to neutrons with energy < 20 MeV, and its interval structure is that of the neutron energy groups.
In case an interval crosses 20 MeV, it will include the contribution of neutrons with energy < 20 MeV and not
that of neutrons with energy > 20 MeV.
8. A program Usxsuw is available with the normal Fluka code distribution in directory $FLUPRO/flutil. Usx-
suw reads USRBDX results in binary form from several runs and allows to compute standard deviations. It
returns double differential and cumulative fluence, with the corresponding percent errors, in a file, and in
another file formatted for easy plotting. It also returns a binary file that can be read out in turn by Usxsuw.
The content of this file is statistically equivalent to that of the sum of the files used to obtain it, and it can
replace them to be combined with further output files if desired (the Usxsuw program takes care of giving it
the appropriate weight).
Example:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
USRBDX 101.0 ANEUTRON 21.0 3.0 4.0 400.0 AntiNeu
USRBDX 5.0 0.0 200.0 0.0 0.0 0.0 &
* Calculate fluence spectrum from 0 to 5 GeV, in 200 linear energy intervals,
* of antineutrons passing from region 3 to region 4 (and not from 4 to 3).
* Write formatted results on unit 21. The area of the boundary is 400 cm2.
* A single angular interval is requested (from 0 to 2pi)
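As a numerical illustration of Note 1 applied to this example (the bin content is an assumed value): each of
the 200 linear intervals is 5.0/200 = 0.025 GeV wide, and a single one-way angular interval of 2π sr is
requested; if one energy bin of the output reads 1.2 × 10⁻⁴ cm⁻² GeV⁻¹ sr⁻¹, the antineutron fluence
integrated over that bin is 1.2 × 10⁻⁴ × 0.025 × 2π ≈ 1.9 × 10⁻⁵ cm⁻² per primary.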
7.77 USRBIN
Scores the distribution of one of several quantities (fluence, energy deposition, star density, dose equivalent,
etc., see WHAT(1) and Note 1) in a regular spatial structure (binning detector) independent from the
geometry, or in a structure based on regions or on user-defined variables.
See also SCORE (scoring by region), EVENTBIN (event-by-event scoring) and USRBDX, USRCOLL, USRTRACK,
USRYIELD (fluence estimators)
The full definition of the detector may require two successive cards. The second card, identified by
the character “ & ” in any column from 71 to 78 (or in the last field in case of free format input), must be
given unless the corresponding defaults are acceptable to the user.
First card:
WHAT(1) : code indicating the type of binning selected. Each type is characterised by a number of
properties:
• structure of the mesh (spatial: R–Z, R–Φ–Z, Cartesian, or special — by region, or
user-defined)
• quantity scored:
– density of energy deposited (total or electromagnetic only)
– dose (total or electromagnetic only)
– star density
– fission density (total, high energy or low energy)
– neutron balance
– activity
– specific activity
– displacements per atom
– density of non ionising energy losses (restricted or unrestricted)
– dose equivalent: convoluting fluence with conversion coefficients or multiplying
dose by a LET-dependent quality factor
– density of momentum transfer
– density of net charge
– fluence (track-length density)
– silicon 1 MeV-neutron equivalent fluence
– high energy hadron equivalent fluence
– thermal neutron equivalent fluence
• method used for scoring (old crude algorithm where the energy lost in a step by a
charged particle is deposited in the middle of the step, or accurate algorithm where the
energy lost is apportioned among different bins according to the relevant step fraction
- see more in Note 14)
• mesh symmetry (no symmetry, or specular symmetry around one of the coordinate
planes, or around the origin point)
0.0 ≤ WHAT(1) ≤ 8.0: Quantities scored at a point. In the case of various types of energy
deposition, quantities calculated using the old algorithm where the energy lost in a
step by a charged particle is deposited at the step midpoint (see Note 14).
Quantity scored:
– if WHAT(2) = 208 (ENERGY), 211 (EM–ENRGY), 228 (DOSE), 229 (UNB–ENER),
230 (UNB–EMEN), 238 (NIEL–DEP), 239 (DPA–SCO), 241 (DOSE–EM), 243
(DOSEQLET) or 244 (RES–NIEL): energy or non ionising energy density or dis-
placements per atom or dose equivalent calculated with a Quality Factor
– if WHAT(2) = 219 (FISSIONS), 220 (HE–FISS) or 221 (LE–FISS): fission density
– if WHAT(2) = 222 (NEU–BALA): neutron balance density
– if WHAT(2) = 231 (X–MOMENT), 232 (Y–MOMENT) or 233 (Z–MOMENT): mo-
mentum transfer density
– if WHAT(2) = 234 (ACTIVITY) or 235 (ACTOMASS): activity or specific activity
– if WHAT(2) = 242 (NET–CHRG): net charge density
– otherwise, density of stars produced by particles (or families of particles) with
particle code or name = WHAT(2)
Not allowed:
– WHAT(2) = 236 (SI1MEVNE), 237 (HADGT20M), 240 (DOSE–EQ), 249
(HEHAD–EQ), 250 (THNEU–EQ)
= 0.0: Mesh: Cartesian, no symmetry
= 1.0: Mesh: R–Z or R–Φ–Z, no symmetry. Φ is the azimuthal angle around the Z axis,
measured from −π to +π relative to the X axis.
= 2.0: Mesh: by region (1 bin corresponds to n regions, with n = 1 to 3)
= 3.0: Mesh: Cartesian, with symmetry ± X (i.e., |x| is used for scoring)
= 4.0: Mesh: Cartesian, with symmetry ± Y (i.e., |y| is used for scoring)
= 5.0: Mesh: Cartesian, with symmetry ± Z (i.e., |z| is used for scoring)
= 6.0: Mesh: Cartesian, with symmetry around the origin (i.e., |x|, |y| and |z| are used
for scoring)
= 7.0: Mesh: R–Z or R–Φ–Z, with symmetry ± Z (i.e., |z| is used for scoring)
= 8.0: Special user-defined 3-D binning. Two variables are discrete (e.g. region number),
the third one is continuous, e.g. a user-defined function of the space coordinates
or of some energy or angular quantity. See 13.2.9.
Variable   Type         Default               Override routine
1st        integer      region number         MUSRBR
2nd        integer      lattice cell number   LUSRBL
3rd        continuous   0.0                   FUSRBV
10.0 ≤ WHAT(1) ≤ 18.0: Quantities scored along a step, apportioned among different bins
according to the relevant step fraction. In particular, in the case of various types
of energy deposition, quantities calculated using the accurate apportioning algorithm
(see Note 14).
Quantity scored:
– if WHAT(2) = 208 (ENERGY), 211 (EM–ENRGY), 228 (DOSE), 229 (UNB–ENER),
230 (UNB–EMEN), 238 (NIEL–DEP), 239 (DPA–SCO), 241 (DOSE–EM), 243
(DOSEQLET) or 244 (RES–NIEL): energy or non ionising energy density (as such
or weighted with a Quality Factor) or displacements per atom
– if WHAT(2) = 236 (SI1MEVNE): fluence weighted by a damage function
– if WHAT(2) = 240 (DOSE–EQ): Dose equivalent, calculated by folding fluence
with conversion coefficients
– if WHAT(2) = 249 (HEHAD–EQ) or 250 (THNEU–EQ): high energy hadron equiv-
alent or thermal neutron equivalent fluence
– otherwise: fluence (track-length density) of particles (or families of particles)
with particle code or name = WHAT(2)
Not allowed:
– WHAT(2) = 219.0 (FISSIONS), 220.0 (HE–FISS), 221.0 (LE–FISS), 222.0
(NEU–BALA), 231.0 (X–MOMENT), 232.0 (Y–MOMENT), 233.0 (Z–MOMENT),
234.0 (ACTIVITY), 235.0 (ACTOMASS), 242.0 (NET–CHRG)
= 10.0: Mesh: Cartesian, no symmetry
= 16.0: Mesh: Cartesian, with symmetry around the origin (|x|,|y|, |z| used for scoring)
= 17.0: Mesh: R–Z or R–Φ–Z, with symmetry ± Z (|z| used for scoring)
= 18.0: Special user-defined 3-D binning. Two variables are discrete (e.g., region number),
the third one is continuous, e.g. a user-defined function of the space coordinates or
of some energy, time or angular quantity. See 13.2.9.
Variable   Type         Default               Override routine
1st        integer      region number         MUSRBR
2nd        integer      lattice cell number   LUSRBL
3rd        continuous   0.0                   FUSRBV
Default = 0.0 (Cartesian scoring without symmetry, star density or energy density
deposited at midstep with the old algorithm)
WHAT(2) = particle type (or generalised particle) to be scored, given by numerical code or by name.
If WHAT(2) selects one of the energy-like quantities listed above (208, 211, 228, 229, 230, 238,
239, 241, 243 or 244):
– If WHAT(1) < 10.0, the binning will score with the old algorithm energy density
or non ionising energy density (as such or weighted with a damage function or a
Quality Factor), or displacements per atom.
– If WHAT(1) ≥ 10.0, the apportioning algorithm will be used (more accurate, see
Note 14).
– If WHAT(2) = 219 (FISSIONS), 220 (HE–FISS) or 221 (LE–FISS), and WHAT(1) < 10,
the binning will score fission density.
WHAT(1) ≥ 10 is not allowed.
– If WHAT(2) = 222 and WHAT(1) < 10, neutron balance density will be scored.
WHAT(1) ≥ 10 is not allowed.
– If WHAT(2) = 231 (X–MOMENT), 232.0 (Y–MOMENT) or 233.0 (Z–MOMENT), and
WHAT(1) < 10, the binning will score density of momentum transfer.
WHAT(1) ≥ 10 is not allowed.
– If WHAT(2) = 234 (ACTIVITY) or 235.0 (ACTOMASS), and WHAT(1) < 10, the binning
will score activity or specific activity.
WHAT(1) ≥ 10 is not allowed.
– If WHAT(2) = 240 (DOSE–EQ) and WHAT(1) ≥ 10, the binning will score dose equivalent
calculated as convolution of particle fluences and conversion coefficients (see option
AUXSCORE, p. 68).
WHAT(1) < 10 is not allowed.
– If WHAT(2) = 242 (NET–CHRG), the binning will score density of net charge deposition.
WHAT(1) ≥ 10 is not allowed.
– If WHAT(2) = 249 (HEHAD–EQ) and WHAT(1) ≥ 10, the binning will score high energy
hadron equivalent fluence.
WHAT(1) < 10 is not allowed.
– If WHAT(2) = 250 (THNEU–EQ) and WHAT(1) ≥ 10, the binning will score thermal
neutron equivalent fluence.
WHAT(1) < 10 is not allowed.
– Any other particle (or family of particles) requested will score:
– If WHAT(1) < 10.0, density of stars produced by particles (or family of particles)
with particle code (or name) = WHAT(2). Of course, this choice is meaningful
only for particles that can produce stars (hadrons, photons and muons).
– If WHAT(1) ≥ 10.0, fluence of particles (or family of particles) with particle code
(or name) = WHAT(2).
Note that it is not possible to score energy fluence with this option alone (it is possible,
however, by writing a special version of the user routine FLUSCW — see 13.2.6)
Default: No default
Continuation card:
WHAT(1) = For Cartesian binning: Xmin (if X symmetry is requested, Xmin cannot be negative)
For R–Z and R–Φ–Z binning: Rmin
For region binnings, first region of the first region set. Default: equal to last region
(= WHAT(4) in the first USRBIN card)
For special binnings, lower limit of the first user-defined variable (first region if the default
version of the MUSRBR routine is not overridden)
Default = 0.0
WHAT(2) = For Cartesian binning: Ymin (if Y symmetry is requested, Ymin cannot be negative)
For R–Z and R–Φ–Z binning: X coordinate of the binning axis.
For region binnings, first region of the second region set. Default: equal to last region
(= WHAT(5) in the first USRBIN card)
For special binnings, lower limit of the second user-defined variable (first lattice cell if the
default version of the LUSRBL routine is not overridden)
Default = 0.0
WHAT(3) = For Cartesian, R–Z and R–Φ–Z binnings: Zmin (if Z symmetry is requested, Zmin cannot be
negative)
For region binnings, first region of the third region set. Default: equal to last region
(= WHAT(6) in the first USRBIN card)
For special binnings, lower limit of the third user-defined variable (0.0 if the default version
of the FUSRBV routine is not overridden)
Default = 0.0
WHAT(4) = For Cartesian binning: number of X bins. (Default: 30.0)
For R–Z and R–Φ–Z binning: number of R bins (default: 50.0)
For region binnings, step increment for going from the first to the last region of the first
region set. (Default: 1.0)
For special binnings, step increment for going from the first to the last “region” (or similar).
(Default: 1.0)
Notes
1. A binning is a regular spatial mesh completely independent from the regions defined by the problem’s geometry.
As an extension of the meaning, “region binnings” and “special user-defined binnings” are also defined, where
the term indicates a detector structure not necessarily regular or independent from the geometry. On user’s
request, Fluka can calculate the distribution of several different quantities over one or more binning structures,
separated or even overlapping.
The following quantities can be “binned”:
– energy density, total or deposited by e+ e− γ only
– dose (energy per unit mass), total or deposited by e+ e− γ only
– star density (density of hadronic inelastic interactions)
– particle track-length density (fluence)
– dose equivalent (fluence convoluted with fluence-to-dose equivalent conversion coefficients, or dose multi-
plied by a LET-dependent Quality Factor)
– activity (per unit volume) or specific activity (per unit mass)
– density of total, high-energy and low-energy fissions
– density of neutron balance (algebraic sum of outgoing neutrons minus incoming neutrons for all interac-
tions)
– density of unbiased energy (physically meaningless but useful for setting biasing parameters and debug-
ging)
– density of momentum transfer components on the three axes
– DPA (Displacements Per Atom)
– density of Non Ionising Energy Losses deposited, unrestricted and restricted (i.e., larger than the DPA
threshold)
– silicon 1 MeV-neutron equivalent fluence
– high energy hadron equivalent fluence (see Note 5.1, p. 46)
– thermal neutron equivalent fluence (see Note 5.1, p. 46)
– density of net charge deposited
The available binning shapes are Cartesian (3-D rectangular, with planes perpendicular to the coordinate axes),
R–Z (2-D cylindrical, with the cylinder axis parallel to the z-axis), and R–Φ–Z (3-D cylindrical).
2. It is possible to define also binnings with an arbitrary orientation in space, by means of options ROT–DEFIni
(p. 221) and ROTPRBIN (p. 224).
3. A star is a hadronic inelastic interaction at an energy higher than a threshold defined via the option THRESHOLd
(p. 238) (or by default higher than the transport threshold of the interacting particle). Star scoring (traditionally
used in most high-energy shielding codes) can therefore be considered as a form of crude collision estimator:
multiplication of star density by the asymptotic value of the inelastic nuclear interaction length gives the fluence
of hadrons having energy higher than the current threshold. However, this is meaningful only if the interaction
length doesn’t vary appreciably with energy; therefore it is recommended to set a scoring threshold = 50 MeV
(using option THRESHOLd), since interaction lengths are practically constant above this energy. Besides, star
densities calculated with a 50 MeV threshold are the basis of some old techniques to estimate induced activity
such as the ω-factors (see [200], p. 106), and the prediction of single isotope yields from the ratio of partial to
inelastic cross section. These techniques have now been made obsolete by the capability of Fluka to calculate
directly induced activity and residual nuclei.
4. Selecting star scoring is meaningful for hadrons, photons and muons (if their energy is sufficiently high). Any
other particle will not produce any star. And in Fluka, stars do not include spallations due to annihilating
particles.
The results will be expressed in stars per cm3 per unit primary weight.
5. Energy deposition will be expressed in GeV per cm3 per unit primary weight. Doses will be expressed in GeV/g
per unit primary weight. To obtain dose in Gy, multiply GeV/g by 1.602176462 × 10⁻⁷.
6. Non Ionising Energy Losses deposited (NIEL–DEP), restricted and unrestricted, will be expressed in GeV per
cm³ per unit primary weight.
7. Displacements Per Atom (DPA) will be expressed as average DPAs in each bin per unit primary weight.
8. Fluence will be expressed in particles/cm2 per unit primary weight.
9. Dose equivalent will be expressed in pSv per unit primary weight.
10. Activity will be expressed in Bq/cm3 per unit primary weight. Specific activity will be expressed in Bq/g per
unit primary weight. Scoring activity requires additional commands RADDECAY, IRRPROFI, DCYTIMES and
DCYSCORE.
11. Total, High-energy and Low-energy fissions will be expressed as fissions/cm3 per unit primary weight.
12. Neutron balance density will be expressed as net number of produced neutrons per cm³ per unit primary
weight.
13. The results from USRBIN are normalised per unit volume and per unit primary weight, except: region binnings
and special user-defined binnings, which are normalised per unit primary weight only; DPA, which is given
as number of displacements per atom per unit primary weight, averaged over the bin volume; and dose
equivalent, which is expressed as pSv per unit primary weight.
In case symmetries are requested, proper rescaled volumes are taken into account for normalisation (that is, an
extra factor 2 is applied to the volume if symmetry around one plane is required, 8 if the symmetry is around
the origin).
14. When scoring energy deposition or dose, i.e., generalised particles 208 (ENERGY), 211 (EM–ENRGY), 228
(DOSE) or 241 (DOSE-EM), it is recommended to set in the first USRBIN card WHAT(1) = 10.0, 11.0,. . . 18.0
(rather than 0.0, 1.0,. . . 8.0).
The difference between the two settings is the following.
With WHAT(1) = 0.0, 1.0,. . . 8.0, the energy lost in a charged particle step is deposited in the bin corre-
sponding to the midpoint of the step: this is the old Fluka algorithm, which is rather inefficient when the
step length is larger than the bin size.
The accurate algorithm, selected by setting WHAT(1) = 10.0, 11.0,. . . 18.0, deposits in every bin traversed
by the step a fraction of energy proportional to the respective chord (track-length apportioning). Statistical
convergence is much faster.
15. When scoring region binning and more than one set of regions is defined, each of the sets (2 or 3) must consist
of the same number of regions. The first bin will contain the sum of what is contained in the first regions of
each set, the second bin the sum of the scores of the second regions, etc.
16. The maximum number of binnings that the user can define is 400.
17. The logical output unit for the estimator results (WHAT(3) of the first USRBIN card) can be any one of the
following:
– the standard output unit 11: estimator results (that in this case need to be formatted) will be written on
the same file as the standard Fluka output.
– a pre-connected unit (via a symbolic link on most UNIX systems, ASSIGN under VMS, or equivalent
commands on other systems)
– a file opened with the Fluka command OPEN
– a file opened with a Fortran OPEN statement in a user-written initialisation routine such as USRINI, USRGLO
or SOURCE (see 13.2.27, 13.2.26, 13.2.19)
– a dynamically opened file, with a default name assigned by the Fortran compiler (typically fort.xx or
ftn.xx, with xx equal to the chosen logical output unit number).
The results of several USRBIN detectors in the same Fluka run can be written on the same file, but of course
only if they are all in the same mode (all formatted, or all unformatted).
It is also possible in principle to write on the same file the results of different kinds of estimators (USRBDX,
USRTRACK, etc.) but this is not recommended, especially in the case of an unformatted file, because it would
make very difficult any reading and analysis.
18. In R–Φ–Z binnings, the azimuthal Φ coordinates extend from −π to +π (−180◦ to +180◦ ). Φ = 0 corresponds
to the x-axis.
19. Binning data can be obtained also separately for each “event” (“event” = history of a primary particle and all
its descendants). See option EVENTBIN (p. 124) for details.
20. Two programs, Usbsuw and Usbrea, are available with the normal Fluka code distribution in directory
$FLUPRO/flutil. Usbsuw allows to compute standard deviations over several runs, and returns the standard
deviations and the averages in an unformatted file. Usbrea reads an unformatted file and returns the equivalent
formatted file, including the standard deviations if the input file was produced by Usbsuw.
Example:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
USRBIN 10.0 ELECTRON -25.0 7.0 7.0 12.1 verythin
USRBIN -7.0 -7.0 12.0 35.0 35.0 1.0 &
* Cartesian binning of electron track-length density, to be written
* unformatted on unit 25. Mesh is 35 bins between x = -7 and x = 7, 35 bins
* between y = -7 and y = 7, and 1 bin between z = 12 and z = 12.1
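A second, hedged example with a cylindrical mesh follows; the meaning of WHAT(5) of the first card and of
WHAT(5) of the continuation card, not listed above, is assumed by analogy with the Cartesian case, and the
binning axis is taken on the z-axis (x = y = 0).
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
USRBIN 11.0 ENERGY -26.0 10.0 0.0 50.0 EdepRZ
USRBIN 0.0 0.0 0.0 100.0 1.0 100.0 &
* R-Z binning of deposited energy, using the accurate apportioning
* algorithm recommended in Note 14 (WHAT(1) = 11.0). Results are written
* unformatted on unit 26. Mesh is 100 radial bins between R = 0 and R = 10,
* a single azimuthal bin, and 100 Z bins between z = 0 and z = 50.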
7.78 USRCOLL
Defines a detector for a hadron or neutron fluence collision estimator
The full definition of the detector may require two successive cards. The second card, identified by the
character “ & ” in any column from 71 to 78 (or in the last field in case of free format input), must be given
unless the corresponding defaults are acceptable to the user.
First card:
SDUM = any character string (not containing “ & ”) identifying the detector (max. 10 characters)
Continuation card:
WHAT(1) = maximum kinetic energy for scoring (GeV)
Default = beam particle total energy as set by the BEAM (p. 71) option (if no BEAM card
is given, the energy corresponding to 200 GeV/c momentum will be used)
WHAT(2) = minimum kinetic energy for scoring (GeV). Note that the lowest energy limit of the last
neutron group is 10⁻¹⁴ GeV (10⁻⁵ eV) for the 260 data set.
Default = 0.0 if linear energy binning, 0.001 GeV otherwise
SDUM = “ & ” in any position in column 71 to 78 (or in the last field if free format is used)
Notes
1. IMPORTANT! the results of a USRCOLL collision estimator are always given as differential distributions
of fluence in energy, in units of cm−2 GeV−1 per incident primary unit weight. Thus, for example, when
requesting a fluence energy spectrum, to obtain integral binned results (fluence in cm−2 per energy bin per
primary) one must multiply the value of each energy bin by the width of the bin (even for logarithmic binning).
2. If the generalised particle is 208.0 (ENERGY) or 211.0 (EM–ENRGY), the quantity scored is differential energy
fluence, expressed in GeV per cm2 per energy unit per primary. That can sometimes lead to confusion since
GeV cm−2 GeV−1 = cm−2 , where energy does not appear. Note that integrating over energy one gets GeV/cm2 .
3. The maximum number of collision + track-length detectors (see option USRTRACK, p. 262) that the user can
define is 400.
4. The logical output unit for the estimator results (WHAT(3) of the first USRCOLL card) can be any one of the
following:
– the standard output unit 11: estimator results will be written on the same file as the standard Fluka
output.
– a pre-connected unit (via a symbolic link on most UNIX systems, ASSIGN under VMS, or equivalent
commands on other systems)
– a file opened with the Fluka command OPEN
– a file opened with a Fortran OPEN statement in a user-written initialisation routine such as USRINI, USRGLO
or SOURCE (see 13.2.27, 13.2.26, 13.2.19)
– a dynamically opened file, with a default name assigned by the Fortran compiler (typically fort.xx or
ftn.xx, with xx equal to the chosen logical output unit number).
The results of several USRCOLL and USRTRACK detectors in the same Fluka run can be written on the same
file, but of course only if they are all in the same mode (all formatted, or all unformatted).
It is also possible in principle to write on the same file the results of different kinds of estimators (USRBDX,
USRBIN, etc.) but this is not recommended, especially in the case of an unformatted file, because it would
make very difficult any reading and analysis.
5. When scoring neutron fluence, and the requested energy bin structure overlaps with that of the low-energy
neutron groups, bin boundaries are forced to coincide with group boundaries and no bin can be smaller than
the corresponding group.
Actually, the program uses the requested energy limits and number of bins to estimate the desired bin width.
The number of bins above the upper limit of the first low-energy neutron group is recalculated according to
such width.
Note that the lowest energy limit of the last neutron group is 10⁻¹⁴ GeV (10⁻⁵ eV) for the 260 data set. All
group energy boundaries are listed in Table 10.1 on p. 323.
6. A program Ustsuw is available with the normal Fluka code distribution in directory $FLUPRO/flutil. Ust-
suw reads USRCOLL results in binary form from several runs and allows to compute standard deviations. It
returns differential and cumulative fluence, with the corresponding percent errors, in a file, and differential
fluence in another file formatted for easy plotting. It also returns a binary file that can be read out in turn
by Ustsuw. The content of this file is statistically equivalent to that of the sum of the files used to obtain it,
and it can replace them to be combined with further output files if desired (the Ustsuw program takes care
of giving it the appropriate weight).
7. Setting WHAT(4) = -1 will provide the sum of the collisions in all regions, divided by the value set by the user
for WHAT(5).
8. A collision estimator can only be defined for hadrons or neutrons.
Example:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
USRCOLL -1.0 NEUTRON 23.0 15.0 540.0 350. NeutFlu
USRCOLL 250.0 1.E-14 0.0 0.0 0.0 0. &
* Calculate neutron fluence spectrum in region 15 from thermal energies to
* 250 GeV, in 350 logarithmic energy intervals. Write formatted results on
* unit 23. The volume of region 15 is 540 cm3.
7.79 USRGCALL
Calls user-dependent global initialisation.
Notes
Example:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
USRGCALL 789. 321. 18.0 144.0 -27.0 3.14 SPECIAL
* Call global initialisation routine passing over 6 numerical values and a string
7.80 USRICALL
Calls user-dependent initialisation.
Notes
Example:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
USRICALL 123. 456. 1.0 -2.0 18.0 18. FLAG12
* Call initialisation routine passing over 6 numerical values and a string
7.81 USROCALL
Calls user-dependent output.
Notes
Example:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
USROCALL 17.0 17.0 -5.5 1.1 654.0 321. OK
* Call output routine passing over 6 numerical values and a string
7.82 USRTRACK
Defines a detector for a track-length fluence estimator.
The full definition of the detector may require two successive cards (the second card, identified by the
character “ & ” in any column from 71 to 78 (or in the last field in case of free format input), must be given
unless the corresponding defaults are acceptable to the user)
First card:
SDUM = any character string (not containing “ & ”) identifying the detector (max. 10 characters)
Continuation card:
WHAT(1) = maximum kinetic energy for scoring (GeV)
Default = beam particle total energy as set by the BEAM (p. 71) option (if no BEAM card
is given, the energy corresponding to 200 GeV/c momentum will be used)
WHAT(2) = minimum kinetic energy for scoring (GeV). Note that the lowest energy limit of the last
neutron group is 10−14 GeV (10−5 eV) for the 260 data set.
Default = 0.0 if linear energy binning, 0.001 GeV otherwise
SDUM = “ & ” in any position in column 71 to 78 (or in the last field if free format is used)
Notes
1. IMPORTANT! The results of a USRTRACK track-length estimator are always given as differential distri-
butions of fluence in energy, in units of cm−2 GeV−1 per incident primary unit weight. Thus, for example,
when requesting a fluence energy spectrum, to obtain integral binned results (fluence in cm−2 per energy bin
per primary) one must multiply the value of each energy bin by the width of the bin (even for logarithmic
binning).
2. If the generalised particle is 208.0 (ENERGY) or 211.0 (EM–ENRGY), the quantity scored is differential
energy fluence, expressed in GeV per cm2 per energy unit per primary. That can sometimes lead to confusion
since GeV cm−2 GeV−1 = cm−2 , where energy does not appear. Note that integrating over energy one gets
GeV/cm2 .
3. The maximum number of track-length + collision detectors (see option USRCOLL, p. 257) that the user can
define is 500.
4. The logical output unit for the estimator results (WHAT(3) of the first USRTRACK card) can be any one of
the following:
– the standard output unit 11: estimator results will be written on the same file as the standard Fluka
output.
– a pre-connected unit (via a symbolic link on most UNIX systems, ASSIGN under VMS, or equivalent
commands on other systems)
– a file opened with the Fluka command OPEN
– a file opened with a Fortran OPEN statement in a user-written initialisation routine such as USRINI, USRGLO
or SOURCE (see 13.2.27, 13.2.26, 13.2.19)
– a dynamically opened file, with a default name assigned by the Fortran compiler (typically fort.xx or
ftn.xx, with xx equal to the chosen logical output unit number).
The results of several USRTRACK and USRCOLL detectors in the same Fluka run can be written on the same
file, but of course only if they are all in the same mode (all formatted, or all unformatted).
It is also possible in principle to write on the same file the results of different kinds of estimators (USRBDX,
USRBIN, etc.) but this is not recommended, especially in the case of an unformatted file, because it would
make any reading and analysis very difficult.
5. When scoring neutron fluence, and the requested energy interval structure overlaps with that of the low energy
neutron groups, interval boundaries are forced to coincide with group boundaries and no interval can be smaller
than the corresponding group. Actually, the program uses the requested energy limits and number of intervals
to estimate the desired interval width. The number of intervals above the upper limit of the first low-energy
neutron group is recalculated according to such width. To preserve the requested upper energy limit, the width
of the first interval above the low energy group may be smaller than that of the others.
Note that the lowest energy limit of the last neutron group is 10−14 GeV (10−5 eV) for the 260 data set.
All group energy boundaries are listed in Table 10.1 on p. 323.
6. If the scored fluence is that of a generalised particle which includes neutrons (e.g., ALL-PART, ALL-NEUT,
NUCLEONS, NUC&PI+-, HAD-NEUT, and even ENERGY), the spectrum is presented in two separate tables. One
table refers to all non-neutron particles and to neutrons with energies > 20 MeV. The second table refers only
to neutrons with energy < 20 MeV, and its interval structure is that of the neutron energy groups.
In case an interval crosses 20 MeV, it will include the contribution of neutrons with energy < 20 MeV and not
that of neutrons with energy > 20 MeV.
7. A program Ustsuw is available with the normal Fluka code distribution in directory $FLUPRO/flutil. Ust-
suw reads USRTRACK results in binary form from several runs and allows to compute standard deviations.
It returns differential and cumulative fluence, with the corresponding percent errors, in a file, and differential
fluence in another file formatted for easy plotting. It also returns a binary file that can be read out in turn
by Ustsuw. The content of this file is statistically equivalent to that of the sum of the files used to obtain it,
and it can replace them to be combined with further output files if desired (the Ustsuw program takes care
of giving it the appropriate weight).
8. Setting WHAT(4) = -1 will provide the sum of the track-lengths in all regions, divided by the value set by the
user for WHAT(5).
Example:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
USRTRACK 1.0 PHOTON -24.0 16.0 4500.0 150. PhotFlu
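A continuation card would normally follow to set the energy limits. The one below is only an illustrative sketch: the limits are arbitrary choices, and it is assumed that the negative WHAT(3) of the first card selects unformatted output.
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
USRTRACK 1.0 1.E-3 0.0 0.0 0.0 0. &
* Calculate the photon fluence spectrum in region 16 (volume 4500 cm3) from
* 1 MeV to 1 GeV, in 150 linear energy intervals, writing unformatted results
* on unit 24.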
7.83 USRYIELD
Defines a detector to score a double differential particle yield around an
extended or a point target
See also USRBDX
The full definition of the detector may require two successive cards (the second card, identified by the
character “ & ” in any column from 71 to 78 (or in the last field in case of free format input), must be given
unless the corresponding defaults are acceptable to the user)
First card:
For SDUM = anything but BEAMDEF:
WHAT(1) = ie + ia × 100, where ie and ia indicate the two physical quantities with respect to which the
double differential yield is calculated.
If ie > 0, the yield will be analysed in linear intervals with respect to the first quantity; if
ie < 0, the yield distribution will be binned logarithmically.
(Note that for rapidity, pseudo-rapidity and Feynman-x logarithmic intervals are not avail-
able and will be forced to linear if requested).
For the second quantity, indicated by ia , only one interval will be considered.
WHAT(2) > 0.0: number or name of the (generalised) particle type to be scored.
< -80.0 and WHAT(4) = -1.0 and WHAT(5) = -2: the (generalised) particles of type IJ
entering an inelastic hadronic interaction are scored by setting WHAT(2) = -100 -IJ
Default = 201.0 (all particles)
WHAT(3) = logical output unit for the estimator results (see Note 9)
WHAT(4) > 0.0: number or name of the first region defining the boundary (upstream region)
= -1.0 and WHAT(5) = -2.0: the yield of particles emerging from inelastic hadronic inter-
actions is scored
Default = -1.0
WHAT(5) > 0.0: number or name of the second region defining the boundary (downstream region)
= -2.0 and WHAT(4) = -1.0: the yield of particles emerging from inelastic hadronic inter-
actions is scored
Default = -2.0
SDUM = any character string (not containing “ & ”) identifying the yield detector (max. 10 characters)
Continuation card:
WHAT(1) = Upper limit of the scoring interval for the first quantity
Default = beam energy value
WHAT(2) = Lower limit of the scoring interval for the first quantity
Default = 0.0 if linear binning, 1.0 otherwise. Note that these values might not be
meaningful for all available quantities.
WHAT(3) = number of scoring intervals for the first quantity
Default = 50.0
WHAT(4) = upper limit of the scoring interval for the second quantity
WHAT(5) = lower limit of the scoring interval for the second quantity (a single scoring interval
is used for the second quantity, see Note 5)
WHAT(6) = ixa + ixm × 100, where ixa indicates the kind of yield or cross section desired and ixm the
target material (if needed in order to calculate a cross section, otherwise ixm = 0). See
Note 4 in case of a thick target
ixa = 1: plain double differential cross section d²σ/(dx₁ dx₂), where x₁, x₂ are the first and second
quantity
ixa = 2: invariant cross section E d³σ/dp³
ixa = 3: plain double differential yield d²N/(dx₁ dx₂), where x₁, x₂ are the first and second quantity
ixa = 4: double differential yield d²(x₂N)/(dx₁ dx₂), where x₁, x₂ are the first and second quantity
ixa = 5: double differential yield d²(x₁N)/(dx₁ dx₂), where x₁, x₂ are the first and second quantity
ixa = 6: double differential fluence yield (1/cos θ) d²N/(dx₁ dx₂), where x₁, x₂ are the first and second
quantity, and θ is the angle between the particle direction and the normal to the surface
ixa = 7: double differential yield d²(x₂²N)/(dx₁ dx₂), where x₁, x₂ are the first and second quantity
ixa = 8: double differential yield d²(x₁²N)/(dx₁ dx₂), where x₁, x₂ are the first and second quantity
ixa = 16: double differential fluence yield (1/cos θ) d²(x₂N)/(dx₁ dx₂), where x₁, x₂ are the first and
second quantity, and θ is the angle between the particle direction and the normal to the crossed surface
ixa = 26: double differential fluence yield (1/cos θ) d²(x₁N)/(dx₁ dx₂), where x₁, x₂ are the first and
second quantity, and θ is the angle between the particle direction and the normal to the crossed surface
ixm = material number of the target for cross section or LET calculations (default: HYDROGEN)
Default = 0.0 (plain double differential cross section — but see Note 4)
SDUM = “ & ” in any position in column 71 to 78 (or in the last field if free format is used)
For SDUM = BEAMDEF:
WHAT(2) = target particle index, or corresponding name (used by the code to define the c.m.s. frame)
Default = 1.0 (proton)
Notes
1. While option USRBDX (p. 246) calculates angular distributions with respect to the normal to the boundary
at the point of crossing, USRYIELD distributions are calculated with respect to a fixed direction (the beam
direction, or a different direction specified by the user with SDUM = BEAMDEF).
2. When scoring thick-target yields, the angle considered is that between the direction of the particle at the point
where it crosses the target surface and the beam direction (or a different direction specified by the user, see
Note 1). The target surface is defined as the boundary between two regions (positive values of WHAT(4) and
WHAT(5) of the first USRYIELD card).
3. Point-target yields, i.e., yields of particles emerging from inelastic hadronic interactions with single nuclei
(including hadronic interactions by ions and real or virtual photons), are scored by setting WHAT(4) = -1.0
and WHAT(5) = -2.0 in the first USRYIELD card. As an alternative, the corresponding cross sections can
be calculated, depending on the value of WHAT(6). In addition, if WHAT(2) in the same card is < -80.0, the
distributions of particles entering the inelastic hadronic interactions can be scored.
4. Calculating a cross section has little meaning in case of a thick target.
5. Differential yields (or cross sections) are scored over any desired number of intervals for what concerns the first
quantity, but over only one interval for the second quantity. However, the results are always expressed as second
derivatives (or third derivatives in the case of invariant cross sections), and not as interval-integrated yields.
In order to obtain more intervals for the second quantity, the user must define further USRYIELD detectors.
6. In the case of polar angle quantities (|ie | or |ia | = 14, 15, 17, 18, 24, 25) the differential yield is always referred
to solid angle in steradian, although input is specified in radian or degrees.
7. When scoring yields as a function of LET, the intervals will be in keV/(µm g/cm3 ), and the histogram will be
normalized, as usual, to the unit interval of the first and second quantities.
8. A USRYIELD card with SDUM = BEAMDEF, if given, does not refer to a particular detector, but modifies the
reference projectile or target parameters for all USRYIELD detectors of the current run. No continuation card
has to be given after one with SDUM = BEAMDEF.
9. The logical output unit for the estimator results (WHAT(3) of the first USRYIELD card) can be any one of the
following:
– the standard output unit 11: estimator results will be written on the same file as the standard Fluka
output.
– a pre-connected unit (via a symbolic link on most UNIX systems, ASSIGN under VMS, or equivalent
commands on other systems)
– a file opened with the Fluka command OPEN
– a file opened with a Fortran OPEN statement in a user-written initialisation routine such as USRINI, USRGLO
or SOURCE (see 13.2.27, 13.2.26, 13.2.19)
– a dynamically opened file, with a default name assigned by the Fortran compiler (typically fort.xx or
ftn.xx, with xx equal to the chosen logical output unit number).
The results of several USRYIELD detectors in the same Fluka run can be written on the same file, but of course
only if they are all in the same mode (all formatted, or all unformatted).
It is also possible in principle to write on the same file the results of different kinds of estimators (USRBDX,
USRBIN, etc.) but this is not recommended, especially in the case of an unformatted file, because it would
make any reading and analysis very difficult.
10. Not all 37 × 37 combinations of quantities are accepted by the code, nor are they all meaningful (for instance
one could run successfully by setting in the first USRYIELD card WHAT(1) with ia = ie , but the result would
have no physical meaning).
11. When scoring neutron yield with energy as the first quantity, and the requested energy interval structure
overlaps with that of the low energy neutron groups, interval boundaries are forced to coincide with group
boundaries and no interval can be smaller than the corresponding group. Actually, the program uses the
requested energy limits and number of intervals to estimate the desired interval width. The number of intervals
above the upper limit of the first low-energy neutron group is recalculated according to such width. To preserve
the requested upper energy limit, the width of the first interval above the low energy group may be smaller
than that of the others.
Note that the lowest energy limit of the last neutron group is 10−14 GeV (10−5 eV) for the 260 data set.
All group energy boundaries are listed in Table 10.1 on p. 323.
12. If the scored yield with energy as the first quantity is that of a generalised particle which includes neutrons
(e.g., ALL-PART, ALL-NEUT, NUCLEONS, NUC&PI+-, HAD-NEUT, and even ENERGY), the spectrum is presented in two
separate tables. One table refers to all non-neutron particles and to neutrons with energies > 20 MeV. The
second table refers only to neutrons with energy < 20 MeV, and its interval structure is that of the neutron
energy groups.
In case an interval crosses 20 MeV, it will include the contribution of neutrons with energy < 20 MeV and not
that of neutrons with energy > 20 MeV.
13. A program Usysuw is available with the normal Fluka code distribution in directory $FLUPRO/flutil. Usy-
suw reads USRYIELD results in binary form from several runs and allows to compute standard deviations. It
returns differential and cumulative fluence, with the corresponding percent errors, in a file, and differential
fluence in another file formatted for easy plotting. It also returns a binary file that can be read out in turn
by Usysuw. The content of this file is statistically equivalent to that of the sum of the files used to obtain it,
and it can replace them to be combined with further output files if desired (the Usysuw program takes care
of giving it the appropriate weight).
14. The maximum number of yield detectors that the user can define is 1000.
7.84 WW–FACTOr
Defines Weight Windows in selected regions
WHAT(1) ≥ 0.0: Russian Roulette (RR) parameter (Window “bottom” weight at the lower energy threshold
set by WW–THRESh).
< 0.0: resets to -1.0 (no RR) a possible positive value set in a previous WW–FACTOr card
This value can be modified by WHAT(4) in option WW–THRESh or by WHAT(2) in
WW–PROFIle, and can be overridden in the user routine UBSSET (p. 370) by assigning a
value to variable WWLOW.
Default = -1.0 (no RR)
WHAT(2) > 1.7*WHAT(1): Splitting parameter (Window “top” weight at the lower energy threshold set by
WW–THRESh)
= 0.0: ignored
≤ 1.7*WHAT(1): resets to ∞ (no splitting) a possible value set in a previous WW-FACTOr card
This value can be modified by WHAT(4) in option WW–THRESh or by WHAT(2) in
WW–PROFIle (p. 273), and can be overridden in the user routine UBSSET (p. 370) by as-
signing a value to variable WWHIG.
Default = ∞ (no splitting)
WHAT(3) > 0.0: Multiplicative factor to be applied to the two energy thresholds for RR/splitting (defined
by option WW-THRESh) in the region of interest
= 0.0: ignored
< 0.0: resets to 1.0 (thresholds not modified) a possible value set in a previous WW–FACTOr card
This value can be overridden in the user routine UBSSET (p. 370) by assigning a value to
variable WWMUL.
Default = 1.0 (RR/splitting thresholds are not modified)
WHAT(4) = lower bound of the region indices (or corresponding name) in which the indicated RR and/or splitting
parameters apply
(“From region WHAT(4). . . ”)
Default = 2.0
WHAT(5) = upper bound of the region indices (or corresponding name) in which the indicated RR and/or
splitting parameters apply
(“. . . to region WHAT(5). . . ”)
Default = WHAT(4)
SDUM = a number from 1.0 to 5.0 in any position, indicating the low-energy neutron weight window profile
to be applied in the regions selected (see WW–PROFIle). Exceptionally, here SDUM must be a
number, in free format, rather than a character string.
= blank, zero or non numerical: ignored
< 0.0: resets to 1.0 a possible value previously given.
This value can be overridden in the user routine UBSSET (p. 370) by assigning a value to variable JWSHPP.
Default (if no WW–PROFIle card is present): profile number 1
Notes
1. Option WW–FACTOr, which must be used together with WW–THRESh (p. 275), allows the user to define a
very detailed weight window for Russian Roulette and splitting: energy-dependent, per region and per particle.
WW–THRESh is used to set two basic energy values for each particle (including electrons and photons but
not low-energy neutrons). From each basic couple of energies, a different couple of thresholds is generated for
each region by multiplication with the factor provided in WHAT(3). A weight window of minimum width is
defined at the lower threshold by its bottom and top edges (WHAT(1) and WHAT(2)); a second wider window
is obtained from it at the higher threshold by increasing the “top edge” (splitting level) and decreasing the
“bottom edge” (RR level) by the amplification factor given with WW–THRESh. The whole energy range is
thus divided in three parts. In the high-energy part (above the higher threshold) the window is of infinite
width, i.e., no splitting/RR takes place. In the medium-energy range the window narrows down continuously
with decreasing energy, its top and bottom edges varying linearly with energy between the two thresholds. In
the low-energy range the window width remains constant and equal to the minimum value it has at the lower
threshold.
2. Russian Roulette is played in a given region if the particle weight is lower than the bottom window edge for
that energy, particle and region. The particle survives with a probability equal to the ratio between its weight
and the RR edge, and is given a new weight equal to the RR edge itself.
Splitting is performed if the particle weight is higher than the top window edge for that energy, particle and
region. The particle is replaced by two identical ones with half its weight. Note that the top edge must always
be at least a factor two higher than the bottom one, in order to avoid repeated and useless changes of weight.
Actually, it is suggested to never make this factor less than 3 or 4.
3. For low-energy neutrons, a different scheme applies. Instead of dividing the energy range into three parts
(constant window, continuously varying window, infinite window), the window is assigned group by group by
means of option WW–PROFIle (7.85), creating a so-called “weight-window profile”. On the other hand, it is
not possible to assign a different profile to each region: at most 5 different profiles are allowed.
4. A form of splitting and Russian Roulette is also provided by option BIASING (p. 80). The two options, however,
are different in many respects:
– with WW–FACTOr, splitting and RR are played at the moment a particle is taken from the stack and
starts to be transported. With BIASING, splitting/RR happens when a particle crosses a boundary (in
the case of hadrons also — on request — before loading in stack the secondaries from an inelastic hadron
collision)
– while the criterion used by BIASING to trigger splitting/RR depends only on the relative importance of
various regions of phase space, the weight window is based on absolute weight standards pre-assigned to
different phase space regions
– BIASING can have two purposes: when used at collisions with RR only, i.e., reducing factor < 1, it aims
at increasing the total number of histories simulated in a given time, namely to sample over a more
extended part of phase space (e.g., more primary interactions) without leaving any important part not
sufficiently represented. (This is also true of leading particle biasing for electrons and photons via option
EMF–BIAS, p. 108). At the same time (and this holds also for splitting, especially when the option is
used at boundary crossing) it can be applied to sample preferentially from those regions of phase space
which contribute more to the result.
This second purpose is also that of the WW–FACTOr weight window, but in addition this option has the
advantage of avoiding excessive weight fluctuations. These can be dangerous in two ways. In general, if
transport is biased and no control is kept on particle weight, it can happen that too much time is wasted
by tracking particles of very low weight which can only contribute little to the score. On the other hand,
too large weights can also be a problem. If the part of phase space used for scoring (the “detector”) is
very small (typically an element of a “binning” mesh), so that only a limited number of particles have a
chance to enter it, it is statistically important that they all make contributions of the same order. A rare
particle of large weight crossing the detector would give rise to an anomalous score not compensated by
opposite fluctuations.
Why should one then use BIASING and not the weight window? The answer is that “tuning” an absolute weight
by region and energy is more powerful but also much more difficult and time-consuming than just quantifying
relative spatial importances. In general, it requires a lot of experience which can often be obtained only by
performing repeated runs of the same case and by making a careful statistical analysis of history distributions
in phase space. Not all problems are worth it, and not all users are able to do it.
5. It can also be said that WW–FACTOr and BIASING (and other non-analogue transport options) are not nec-
essarily mutually exclusive; on the contrary the weight window can be successfully used to damp excessive
weight fluctuations originated by other techniques. However, it is the user’s responsibility to ensure that the
average absolute weights produced independently by the different options be of the same order of magnitude.
Otherwise, possible conflicts could give rise to a waste of time due to an excessive rate of weight adjustments,
and even to incorrect results.
6. The weight limits defined by WW–FACTOr apply to all particles: however, it is possible to set different values
for specific particles (see WHAT(3) of option WW–THRESh). This is especially necessary when secondary
particles are generated with a weight much smaller than the parent particles of a different kind (for instance,
as the result of the LAM–BIAS option).
7. WW-FACTOr is one of the two Fluka options where SDUM is used to input numerical data. (Actually, the
profile number is first read as a string and then an internal reading is performed on the string to get the
number).
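As an illustration, a possible WW–FACTOr setting might look as follows; this is only a sketch, with region numbers, weight limits and profile number chosen arbitrarily rather than taken from the manual:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
WW-FACTOr 0.2 2.0 1.0 5.0 25.0 1.0
* In regions 5 to 25, at the lower WW-THRESh threshold: play Russian Roulette
* below weight 0.2 and split above weight 2.0; leave the RR/splitting energy
* thresholds unmodified; use low-energy neutron weight-window profile n. 1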
7.85 WW–PROFIle
WHAT(1) = weight window extra factor for the profile defined by WHAT(6), concerning the energy groups
defined by WHAT(3), WHAT(4) and WHAT(5) (both the top and bottom window levels will be
multiplied by WHAT(1)). See Note 2.
= 0.0: ignored
< 0.0: resets to default (extra factor = 1.0)
Default = 1.0 (windows are not modified)
WHAT(2) = importance extra factor for the profile defined by WHAT(6), concerning the energy groups
defined by WHAT(3), WHAT(4) and WHAT(5) (the region importances defined by option BIASING
will be multiplied by WHAT(2), for low-energy neutrons only). See Note 3.
WHAT(3) = lower bound of the group numbers for which the extra factor WHAT(1) or WHAT(2) is re-
quested
(“From group WHAT(3). . . ”)
Default = 1.0 (the group of highest energy)
WHAT(4) = upper bound of the group numbers for which the extra factor WHAT(1) or WHAT(2) is re-
quested
(“. . . to group WHAT(4). . . ”)
Default = WHAT(3)
WHAT(6) = profile number defined by WHAT(1), WHAT(3–5) (up to 5 different profiles are allowed).
Default : profile number 1
Default (option WW–PROFIle not given): no extra factor is applied to low-energy neutron windows
and importances
Notes
1. Option WW–PROFIle applies only to low-energy neutrons. It is used to refine the basic bias setting defined by
two other options: WW–FACTOr (p. 270) and BIASING (p. 80).
2. WHAT(1) refers to WW–FACTOr: it allows the user to tune the weight window by energy group (WW–FACTOr
does the same by region). The profile defined will be applied to raise or lower the weight window levels
(for low-energy neutrons only) in a group of regions selected by means of WHAT(4–6) and SDUM in option
WW–FACTOr.
3. WHAT(2) refers to BIASING: its aim is to define a reference weight level in each region, which is used by the
program to avoid excessive biasing in some critical cases. If the user has defined a weight window (options
WW–FACTOr and WW–THRESh), the reference weight level is not needed because it is derived directly from
the window parameters. If the user has not defined a weight window but has defined region importances
(option BIASING), the reference weight level for a region is assumed in most cases to be the inverse of the
corresponding importance. However, since importance biasing is not based on absolute values of importance
but on importance ratios, in some rare cases the user may give importances which are not equal to the inverse
of the average particle weight, but only proportional to it. (This is in order to better exploit the full importance
range, since for technical reasons in Fluka allowed importance values range only from 0.0001 to 10000.). In
such cases it is possible to multiply all the importances by a factor WHAT(2) only for the purpose of calculating
the reference weight level.
Modification of importances by a factor WHAT(2) applies to all regions (but only for low-energy neutrons). If
neither weight window nor importances have been given, Fluka still calculates a weight reference level from
the ratio of physical to biased non-absorption probability. If a particle’s weight exceeds the reference level in
a given region by more than a factor calculated at run time, non-absorption probability biasing is switched off
and transport continues according to the physical absorption probabilities (analogue transport).
Example 1:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
WW-PROFIle 0.9 0.0 1. 11. 0.0 4.0
WW-PROFIle 0.7 0.0 12. 70. 0.0 4.0
WW-PROFIle 0.5 0.0 71. 260. 0.0 4.0
* Profile n. 4 is defined as a multiplication factor for weight windows, where
* the upper and the lower weight limits (as defined by WW-FACTOr and
* WW-THRESh) are multiplied by 0.9 for the first 11 neutron groups, by 0.7
* for groups 12 to 70, and by 0.5 for groups from 71 to 260.
Example 2:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
WW-PROFIle 0.0 1.8 1. 65. 0.0 2.0
WW-PROFIle 0.0 2.3 66. 260. 0.0 2.0
* Profile n. 2 is defined as a multiplication factor for importances, where
* the importances (as defined by BIASING) are multiplied by 1.8 for the first
* 65 neutron groups, and by 2.3 for groups 66 to 260.
7.86 WW–THRESh
Defines the energy limits for a Russian Roulette/splitting weight window
and applies particle-dependent modification factors to the windows defined
by WW–FACTOr
WHAT(1) > 0.0: upper kinetic energy threshold in GeV for Russian Roulette (RR)/Splitting with a
weight window. For low-energy neutrons, corresponding (smallest) group number
(included)
= 0.0: ignored
< 0.0: any previously selected threshold is cancelled
WHAT(2) ≥ 0.0 and < WHAT(1): lower kinetic energy threshold in GeV for RR/Splitting with a weight
window. For low-energy neutrons, corresponding (largest) group number (included)
< 0.0 or > WHAT(1): WHAT(2) is set = WHAT(1)
WHAT(3) > 0.0: amplification factor used to define the weight window width at the higher energy
threshold represented by WHAT(1). The weight window at the higher energy threshold
is obtained by multiplying by WHAT(3) the top edge of the window (splitting level)
at the lower threshold, and dividing by the same factor the bottom edge (RR-level)
< 0.0: |WHAT(3)| is used as a multiplication factor for the bottom and top levels of every
region for the particles selected by WHAT(4–6). That is, for such particles both bottom
and top are multiplied by |WHAT(3)|
Default = 10.0 (amplification factor for the splitting and RR-level at the higher threshold).
The particle dependent multiplication factor by default is set = 1.0
WHAT(4) = lower bound of the particle numbers (or corresponding particle name) to which the indicated
weight window energy limits apply. Note that particle number 40.0 indicates low-energy neu-
trons (for this purpose only!). Particle number 8.0 indicates neutrons with energy > 20 MeV
(“From particle WHAT(4). . . ”)
Default = 1.0
WHAT(5) = upper bound of the particle numbers (or corresponding particle name) to which the indicated
weight window energy limits apply
(“. . . to particle WHAT(5). . . ”)
Default = WHAT(4) if WHAT(4) > 0.0, all particles otherwise
Notes
1. Option WW–THRESh is only meaningful when WW-FACTOr (7.84) is also requested. See Note 1 to that option
for more information.
2. For low-energy neutrons, the two energy thresholds are expressed as group numbers, while for all other particles
(including high-energy neutrons) they are expressed in GeV. Therefore, thresholds for low-energy neutrons must
be assigned in a separate WW–THRESh card from other particles.
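As an illustration, a possible WW–THRESh setting might look as follows; the energies, amplification factor and particle range are arbitrary sketch values, not taken from the manual:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...+...8
WW-THRESh 2.0 0.01 5.0 PROTON NEUTRON
* The weight window defined by WW-FACTOr keeps its minimum width below 10 MeV;
* at 2 GeV it is widened by a factor 5 (top edge multiplied and bottom edge
* divided by 5); it applies to all particles from proton to neutron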
Combinatorial Geometry
8.1 Introduction
The Combinatorial Geometry (CG) used by Fluka is a modification of the package developed at ORNL for
the neutron and gamma-ray transport program Morse [60] which was based on the original combinatorial
geometry by MAGI (Mathematical Applications Group, Inc.) [99, 121].
The default input format is fixed, and different from that adopted elsewhere in the Fluka code. The
input sequence must be completely contained between a GEOBEGIN and a GEOEND card (see the correspond-
ing description on p. 137, 139).
Two concepts are fundamental in CG: bodies and regions. Originally, Morse bodies were defined
as convex solid bodies (finite portions of space completely delimited by surfaces of first or second degree,
i.e., planes or quadrics). In Fluka, the definition has been extended to include infinite cylinders (circular
and elliptical) and planes (half-spaces). Use of such “infinite bodies” is encouraged since it makes input
preparation and modification much easier and less error-prone. They also provide a more accurate and
faster tracking.
Regions are defined as combinations of bodies obtained by boolean operations: Union, Subtraction and
Intersection. Each region is not necessarily simply connected (it can be made of two or more non contiguous
parts), but must be of homogeneous material composition. Because the ray tracing routines cannot track
across the outermost boundary, all the regions must be contained within a surrounding “blackhole” (an
infinitely absorbing material, in Morse jargon “external void”, designated by the Fluka material number 1),
so that all escaping particles are absorbed. It is suggested to make the external blackhole region rather big,
so as not to interfere with possible future modifications of the problem layout. The external blackhole must
be completely surrounded by the boundary of a closed body, and therefore cannot be defined by means of
half-spaces or infinite cylinders only. Inside such outermost boundary, each point of space must belong to
one and only one region.
Note that in Morse the concept of “region” refers to a portion of space of homogeneous statistical
importance or weight setting, which may extend over one or several “zones” (homogeneous in material
composition). Since the two Morse concepts of region and zone coincide in Fluka (there is a one-to-one
correspondence), the term “region” will be used here to define “a portion of space of uniform material
composition, obtained by boolean operations on one or more subregions”, while “zone” will indicate one of
such subregions, obtained by boolean operations on one or more geometrical bodies.
Repetition of sets of regions according to symmetry transformations is possible in Fluka through the
card LATTICE (p. 297) and through a user-written routine. This allows, for instance, to model in detail only
a single cell of a calorimeter and to replicate it in the entire volume.
The geometry input must be structured as follows:
GEOBEGIN card (in Fluka standard format, or free format if requested by a FREE or GLOBAL command)
Geometry title (in special format, or in free geometry format if requested by a GLOBAL command)
Body data (in special or free geometry format)
END line (in special or free geometry format)
Region data (in special or free geometry format)
END line (in special or free geometry format)
LATTICE cards (optional, in Fluka standard format, or in free format if requested by a FREE or GLOBAL
command)
Region volumes (optional, see Geometry title line)
GEOEND card (in Fluka standard format, or free format if requested by a FREE or GLOBAL command)
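Schematically, a minimal geometry block in free (name-based) format might look as follows. This is only an illustrative sketch: body names, dimensions and the title-line fields are arbitrary, and it assumes that SPH takes the centre coordinates and the radius and that each region line gives the region name, the NAZ value and the zone expression.
GEOBEGIN                                                              COMBNAME
    0    0          A minimal geometry
* --- body data ---
SPH blkbody   0.0 0.0 0.0 10000.0
SPH airbody   0.0 0.0 0.0 1000.0
END
* --- region data ---
BLKHOLE      5 +blkbody -airbody
AIR          5 +airbody
END
GEOEND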
The GEOBEGIN card follows the general Fluka format (see description on p. 49). It can be in free format if the latter
has been requested with option FREE (see p. 134) or GLOBAL (p. 141). The rest of the geometry must be in
a special fixed format described below, unless free geometry format has been requested by GLOBAL.
WHAT(4) > 0.0: logical unit for geometry output. If different from 11, the name of the corresponding
file must be input on the next card if WHAT(3) = 0.0 or 5.0, otherwise on the card
following the next one. Values of WHAT(3) ≠ 11.0 and < 21.0 must be avoided
because of possible conflicts with Fluka pre-defined units.
Default = 11.0 (i.e., geometry output is printed on the standard output)
WHAT(5) = ip0 + ip1 × 1000, where ip0 indicates the level of parentheses optimisation and ip1 , if > 0,
forces the geometry optimisation even when there are no parentheses
Default (option GEOBEGIN not given): not allowed! GEOBEGIN and GEOEND must always be present.
Three variables are input in the CG Title line: IVLFLG, IDBG, TITLE. The format is (2I5,10X,A60).
The first integer value (IVLFLG = Input VoLume FLaG) is a flag to indicate how to normalise the quantities
scored in regions by the Fluka option SCORE (p. 226).
IVLFLG = 0 means that no normalisation must be performed (output values are total stars or total
energy deposited in each region).
IVLFLG = 1 and IVLFLG = 2 are reserved for future use and they have currently no meaning in Fluka.
IVLFLG = 3 means that the scores must be normalised by dividing by the region volumes input by the
user just before the GEOEND line (see 8.2.9).
The second integer value (IDBG) can be used to modify the format with which body and region data are read:
the allowed values are 0, 10, -10, 100 and -100 (their effect on the body and region input formats is described
below); any other IDBG value should be avoided. Note however that the maximum number of regions is
dimensioned to 10000 in the INCLUDE file (DIMPAR). The value of IDBG is irrelevant if free format has been
requested (see the GLOBAL command, p. 141).
The remaining 60 characters can be used for any alphanumeric string at the user’s choice.
The geometry must be specified by establishing two tables. The first table describes the type, size and
location of the bodies used in the geometry description. The second table defines the physical regions in
terms of these bodies.
There are three kinds of possible input formats, two fixed and one free. Free format, if used, implies
necessarily also the use of free format in region input (see 8.2.7).
Fixed format for both body and region input is the default, unless requested differently by a GLOBAL
command at the beginning of the input file. In fixed format, each input body is defined by: its code, a
sequential number, and a set of floating point numerical parameters defining its size, position and orientation
in space (all in cm).
Default fixed format is the original CG one as used in Morse and in other Monte Carlo programs.
It expects up to 6 floating point values per line.
High-accuracy fixed format allows numerical data to be entered with full precision (16 significant digits) and
accommodates a maximum of only 3 floating point values per line.
The fixed input format for each body depends on the value of the IDBG variable given in the Geometry
Title line (see 8.2.2 above).
If IDBG = 0, 10 or 100, the body input format is (2X, A3, I5, 6D10.3);
if IDBG = -10 or -100, the format is (2X, A3, I5, 3D22.15);
where the 3-letter code in columns 3-5 is one of the following:
ARB BOX ELL PLA QUA RAW RCC REC RPP SPH
TRC WED XCC XEC XYP XZP YCC YEC YZP ZCC ZEC
(columns 3–5 must be left blank in continuation lines). The integer in columns 6–10 is the body sequential
number (if left blank numbers are assigned automatically, but this is not recommended; it must be left
blank in continuation lines).
The floating-point numbers in columns 11–76 are geometrical quantities defining the body (their number
depends on the body type as explained below, and can extend over several continuation lines). The presence
of the decimal point in the numerical data is compulsory.
Free format is used for both body and region input only if requested by a GLOBAL command (see p. 141)
at the beginning of the input file or by the string COMBNAME in the SDUM field of the GEOBEGIN
command.
In free format, each body is defined by: its code, its identifier (an alphanumeric string of up to 8
characters, with the first character alphabetical) and a set of numerical parameters defining its size, position
and orientation in space (all in cm).
Free format has been introduced only recently and is expected soon to supersede the other formats, which
will however be kept for reasons of backward compatibility. Its main advantages, in addition to the freedom
from strict alignment rules, are the possibility to modify the input sequence without affecting the region
description (for instance, by inserting a new body) and the availability of parentheses to perform complex
boolean operations in the description of regions.
The input for each body consists of a 3-letter code indicating the body type
ARB BOX ELL PLA QUA RAW RCC REC RPP SPH
TRC WED XCC XEC XYP XZP YCC YEC YZP ZCC ZEC
followed by a unique “body name” (alphanumeric identifier) and a set of geometrical quantities defining the
body (their number depends on the body type as explained below). The different items, separated by one
or more blanks, or by one of the separators , / ; : can extend over as many lines as needed, each line
having a maximum length of 132 characters. See option FREE (p. 134) for more detailed instructions on the
use of separators.
After the last body description, end with a line having the code END.
With all input formats, a line having an asterisk (*) in column 1 is treated as a comment line. Such
comment lines can be inserted freely at any point of Body and Region input, allowing easier identification.
An RPP (Fig. 8.1) has its edges parallel to the coordinate axes.
It is defined by 6 numbers in the following order: Xmin , Xmax , Ymin , Ymax , Zmin , Zmax (minimum and
maximum coordinates which bound the parallelepiped).
Warning! Of course Xmin must be < Xmax , Ymin < Ymax and Zmin < Zmax . If this condition is not
satisfied, the body is ignored.
An RPP definition extends over one single line in default fixed format, or over two lines in high-accuracy
body fixed format (IDBG = -10 or -100 in the CG Title line, see 8.2.2).
Example (the comment lines shown are allowed input lines):
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7..
RPP 4 -20.0 +20.0 -50.0 +50.0 -38.5 +38.5
* (a parallelepiped centred on the origin)
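In free format the same body would be identified by a name rather than by a sequential number, for instance (the name is an arbitrary choice):
RPP slab  -20.0 +20.0 -50.0 +50.0 -38.5 +38.5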
Fig. 8.1: Rectangular Parallelepiped (RPP)
A BOX (Fig. 8.2) is also a Rectangular Parallelepiped, but with arbitrary orientation in space. Its use is
generally not recommended, since it can be replaced by a suitable combination of infinite planes (PLA, XYP,
XZP, YZP). Planes are easier to define, make tracking more accurate and often only a few are needed in a
region description.
A BOX is defined by 12 numbers in the following order: Vx, Vy, Vz (coordinates of a vertex), Hx(1), Hy(1), Hz(1),
Hx(2), Hy(2), Hz(2), Hx(3), Hy(3), Hz(3) (x-, y- and z- components of three mutually perpendicular vectors
representing the height, width and length of the box). Note that it is the user’s responsibility to ensure
perpendicularity. This is best attained if the user has chosen high-accuracy input fixed format (IDBG = -10
or -100 in the CG Title line, see 8.2.2), or free format, and the value of each vector component is expressed
with the largest available number of significant digits.
A BOX definition extends over 2 lines in default fixed format, or over 4 lines in high-accuracy body fixed
format.
Example in default fixed format:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7..
BOX 18 0.0 0.0 0.0 7.0710678 7.0710678 0.0
-14.142136 14.142136 0.0 0.0 0.0 30.0
* (a parallelepiped with a corner on the origin, with edges 10, 20 and
* 30 cm long, rotated counterclockwise by 45 degrees in the x-y plane)
Fig. 8.2: BOX
An RCC (Fig. 8.4) can have any orientation in space. It is limited by a cylindrical surface and by two plane
faces perpendicular to its axis. (If the cylinder axis is parallel to one of the coordinate axes, it is worth
considering instead the use of an infinite cylinder XCC, YCC or ZCC (see below), leading to increased tracking
speed).
Each RCC is defined by 7 numbers: Vx , Vy , Vz (coordinates of the centre of one of the circular plane faces),
Hx , Hy , Hz (x-, y- and z- components of a vector corresponding to the cylinder height, pointing to the other
plane face), R (cylinder radius).
An RCC definition extends over 2 lines in default fixed format, or over 3 lines in high-accuracy body fixed
format.
Example in default fixed format:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7..
RCC 07 5.0 5.0 5.0 57.735027 57.735027 57.735027
37.
* (a circular cylinder 100 cm long and of 37 cm radius, with base
* centred at point x=5, y=5, z=5, its axis making equal angles to
* the three coordinate axes).
A REC (Fig. 8.5) can have any orientation in space. It is limited by a cylindrical elliptical surface and by
two plane faces perpendicular to its axis. (If the cylinder axis is parallel to one of the coordinate axes, it is
worth considering instead the use of an infinite cylinder XEC, YEC or ZEC (see below), leading to increased
tracking speed).
Each REC is defined by 12 numbers: Vx, Vy, Vz (coordinates of the centre of one of the elliptical plane faces),
Hx, Hy, Hz (x-, y- and z- components of a vector corresponding to the cylinder height, pointing to the other plane
face), Rx(1), Ry(1), Rz(1) (components of a vector corresponding to the minor half-axis of the cylinder elliptical
base), Rx(2), Ry(2), Rz(2) (ditto for the major half-axis).
A REC definition extends over 2 lines in default fixed format, or over 4 lines in high-accuracy body fixed
format.
Example in default fixed format:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7..
REC 1 -10.0 12.0 7.0 0.0 58.0 0.0
9. 0.0 0.0 0.0 0.0 17.0
* (an elliptical cylinder 58 cm long parallel to the y axis, with minor
* half-axis 9 cm long parallel to the x coordinate axis, major
* half-axis 17 cm long parallel to the z axis, base centred at point
* x=-10, y=12, z=7)
Fig. 8.4: Right Circular Cylinder (RCC)
Fig. 8.5: Right Elliptical Cylinder (REC)
The same REC example in high-accuracy body fixed format:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+.
REC 1 -10.0 12.0 7.0
0.0 58.0 0.0
9. 0.0 0.0
0.0 0.0 17.0
A TRC (Fig. 8.6) can have any orientation in space. It is bounded by a conical surface and by two circular
plane faces perpendicular to the axis of the cone.
Each TRC is defined by 8 numbers: Vx , Vy , Vz , (coordinates of the centre of the major circular base), Hx , Hy ,
Hz (components of a vector corresponding to the TRC height, directed from the major to the minor base),
R(1) (radius of the major base), R(2) (radius of the minor base)
A TRC definition extends over 2 lines in default fixed format, or over 3 lines in high-accuracy body fixed
format.
Example in default fixed format:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7..
TRC 102 0.0 0.0 -130.0 0.0 0.0 1000.0
600. 150.0
* (a truncated cone 1000 cm long parallel to the z axis, with the
* larger circular base 600 cm in radius located at z = -130 and the
* smaller base 150 cm in radius located at z = 870)
The same body in free geometry format:
TRC NewCone 0.0 0.0 -130.0 0.0 0.0 1000.0 600. 150.0
Fig. 8.6: Truncated Right Circular Cone (TRC)
An ELL (Fig. 8.7) is a prolate (cigar-shaped) ellipsoid, obtainable by revolution of an ellipse around its major
axis, and having any orientation in space.
Each ELL is defined by 7 numbers: Fx(1), Fy(1), Fz(1), Fx(2), Fy(2), Fz(2) (coordinates of the two foci on the
major ellipsoid axis), L (full length of the major axis).
An ELL definition extends over 2 lines in default fixed format, or over 3 lines in high-accuracy body fixed
format.
Example in default fixed format:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7..
ELL 003 -400.0 0.0 0.0 400.0 0.0 0.0
1000.
* (an ellipsoid obtained by revolution around the x axis of an ellipse
* centred at the origin, with major axis parallel to x 1000 cm long and
* minor axis 600 cm long).
A WED (Fig. 8.8) is the half of a BOX (see), cut by a plane passing through its centre and through four
corners. Its use, like that of the BOX, is now mostly superseded by the availability of infinite planes (XYP,
XZP, YZP and PLA).
A WED is defined by 12 numbers: Vx, Vy, Vz (coordinates of one of the rectangular corners), Hx(1), Hy(1), Hz(1),
Hx(2), Hy(2), Hz(2), Hx(3), Hy(3), Hz(3) (x-, y- and z- components of three mutually perpendicular vectors corresponding to the
height, width and length of the wedge). Note that it is the user’s responsibility to ensure perpendicularity.
This is best attained if the user has chosen high-accuracy input format (IDBG = -10 or -100 in the CG
Title line, see 8.2.2), or free format, and the value of each vector component is expressed with the largest
available number of significant digits.
The face defined by vectors 1 and 3 and that defined by vectors 2 and 3 are rectangular; the two faces defined
by vectors 1 and 2 are triangular; the fifth face is rectangular with two edges parallel to vector 3 and two
edges parallel to the hypotenuse of the triangle made by vectors 1 and 2.
A WED definition extends over 2 lines in default fixed format, or over 4 lines in high-accuracy body fixed
format.
Example in default fixed format:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7..
WED 97 0.0 0.0 0.0 7.0710678 7.0710678 0.0
-14.142136 14.142136 0.0 0.0 0.0 30.0
* (the bottom half of a parallelepiped with a corner on the origin,
* with edges 10, 20 and 30 cm long, rotated counterclockwise by 45
* degrees in the x-y plane)
An ARB (Fig. 8.9) is a portion of space bounded by 4, 5 or 6 plane faces. Its use is rather complicated and is
now superseded by the availability of infinite planes (XYP, XZP, YZP and PLA). For completeness, however,
a description of input will be reported here.
Assign an index (1 to 8) to each vertex. For each vertex, give the x, y, z coordinates on 4 lines (8 lines in
high-accuracy body fixed format), with 6 numbers on each line (3 numbers per line in high-accuracy format).
8.2.4.10 Infinite half-space delimited by a coordinate plane. Code: XYP, XZP, YZP
There are 4 kinds of infinite half-spaces. Three of them are delimited by planes perpendicular to the
coordinate axes:
Fig. 8.10: Infinite half-space delimited by a plane perpendicular to the z axis (XYP)
Fig. 8.11: Infinite half-space delimited by a generic plane (PLA)
Each PLA (Fig. 8.11) is defined by 6 numbers: Hx , Hy , Hz (x-, y- and z- components of a vector of arbitrary
length perpendicular to the plane), Vx , Vy , Vz (coordinates of any point lying on the plane).
The half-space “inside the body” is that from which the vector is pointing (i.e., the vector points “outside”).
A PLA definition extends over a single line in default fixed format, and over two lines in high-accuracy body
fixed format.
Example in default fixed format:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7..
PLA 74 0.0 1.0 1.0 200.0 -300.0 240.0
* (all points "below" a plane at 45 degrees in the y-z projection which
* passes through the point x=200, y=-300, z=240 - note that for such a
* plane the x value of the point is completely irrelevant - )
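In free format the same half-space could be written, with an arbitrarily chosen body name, as:
PLA cutpla  0.0 1.0 1.0 200.0 -300.0 240.0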
8.2.4.12 Infinite Circular Cylinder parallel to a coordinate axis. Code: XCC, YCC, ZCC
A XCC (YCC, ZCC) is an infinite circular cylinder parallel to the x (y, z) axis.
Each XCC (YCC, ZCC) (Fig. 8.12) is defined by 3 numbers: Ay , Az (Az , Ax for YCC, Ax , Ay for ZCC)
(coordinates of the cylinder axis), R (cylinder radius).
An XCC (YCC, ZCC) definition, in fixed format, always extends over one single line.
Example in default fixed format:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7..
XCC 013 -480.0 25.0 300.0
* (an infinite cylinder of radius 300 cm, with axis defined by y=-480,
* z=25)
ZCC 014 0.0 0.0 2.5
* (an infinite cylinder of radius 2.5 cm, with axis equal to the z axis)
8.2.4.13 Infinite Elliptical Cylinder parallel to a coordinate axis. Code: XEC, YEC, ZEC
A XEC (YEC, ZEC) is an infinite elliptical cylinder parallel to the x (y, z) axis, and with the axes of the
ellipse parallel to the other two coordinate axes.
Each XEC (YEC, ZEC) is defined by 4 numbers: Ay , Az , (Az , Ax for YEC, Ax , Ay for ZEC) (coordinates of the
cylinder axis), Ly , Lz (Lz , Lx for YEC, Lx , Ly for ZEC) (semiaxes of the ellipse).
A XEC (YEC, ZEC) definition extends over one single line in default fixed format, and over two lines in high-
accuracy body fixed format.
Example in default fixed format:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7..
ZEC 101 15.0 319.0 33.0 80.0
* (an infinite elliptical cylinder, centred on x=15, y=319, with the
* ellipse minor semi-axis parallel to x, 33 cm long, and the major
* semi-axis parallel to y, 80 cm long)
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+.
QUA 27 0.0025 0.04 0.0064
0.0 0.0 0.0
0.055 0.64 0.0256
1.8881
QUA Elipsoid 0.0025 0.04 0.0064 0.0 0.0 0.0 0.055 0.64 0.0256 1.8881
NOTE
The body definitions which are contained between the lines $Start_xxx and $End_xxx are automat-
ically modified according to the syntax described below.
8.2.5.1 Expansion
This directive provides a coordinate expansion (or reduction) of the body dimensions by a defined scaling
factor, for all bodies embedded between the two lines.
Example:
$Start_expansion 10.
QUA Elipsoid 0.0025 0.04 0.0064 0.0 0.0 0.0 0.055 0.64 0.0256 1.8881
$End_expansion
transforms an ellipsoid centred at (-11, -8, -2), with semiaxes 20, 5 and 12.5 cm long parallel to the coordinate
axes, to one centred at (110, 80, -20) with semiaxes 200, 50 and 125 cm long.
8.2.5.2 Translation
This directive provides a coordinate translation. The bodies embedded between the two lines are translated
by [dX] [dY] [dZ] on the three axes.
Example:
$Start_translat -5., -7., +9.
QUA Elipsoid 0.0025 0.04 0.0064 0.0 0.0 0.0 0.055 0.64 0.0256 1.8881
$End_translat
transforms an ellipsoid centred at (-11, -8, -2), with semiaxes 20, 5 and 12.5 cm long parallel to the coordinate
axes, to an identical one centred at (6, 1, 7)
8.2.5.3 Transformation
This directive provides a coordinate transformation, predefined by a ROT–DEFIni card, for all bodies embed-
ded between the two lines.
Example 1:
....
* Cylindrical target is transformed with transformation "Rotdefi1"
$Start_transform Rotdefi1
RCC targRepl 0.0 0.0 -5.0 0.0 0.0 10.0 5.0
$End_transform
....
* ROT-DEFI transformations shift of (0,-2,-30) then rotation of -21 degrees
* around the x axis
ROT-DEFI 0.0 -2.0 -30.0 Rotdefi1
ROT-DEFI 100. -21. Rotdefi1
Example 3:
ROT-DEFIni 3. -90. 0. 0. 0. 0. XtoZ
........................................
$Start_transform XtoZ
$Start_translat 0. -30. +20.
$Start_expansion 10.
QUA Elipsoid 0.0025 0.04 0.0064 0.0 0.0 0.0 0.055 0.64 0.0256 1.8881
$End_expansion
$End_translat
$End_transform
transforms an ellipsoid centred at (-11, -8, -2), with semiaxes 20, 5 and
12.5 cm long parallel to the coordinate axes, to a similar one with
semiaxes 200, 50 and 125 cm, centred at (11, -22, 18) and rotated by 90
degrees (axis z becomes axis x). Note that $Start_expansion takes
precedence over $Start_translat, which in turn takes precedence over
$Start_transform.
Directives $Start_expansion and $Start_translat are applied when reading the geometry: therefore they
imply no CPU penalty. Directive $Start_transform, instead, is applied at run-time and requires some
additional CPU time.
Directives can be nested. All the directives can be used in association with lattices.
The ROT–DEFIni cards are used by the geometry directive to transform the coordinates of the bodies
(not of the particles!). If a “-” sign is placed in front of the [ROT-DEFIni name or number] of the
$Start_transform directive the inverse transformation is used instead.
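For instance, to place a second copy of the target of Example 1 with the inverse of that transformation, one might write (the body name is an arbitrary choice):
$Start_transform -Rotdefi1
RCC targInv  0.0 0.0 -5.0 0.0 0.0 10.0 5.0
$End_transform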
Body definitions must be terminated by a line with the string END (in column 3–5 if fixed format is used).
The various regions are described in terms of differences, intersections and unions of bodies. As in the case
of body description, the user has the choice between free format and two kinds of fixed format. One of the
latter is the traditional format used by the original Combinatorial Geometry as implemented for example in
Morse. The other fixed format is similar to it, but has been extended to allow body numbers larger than
10000. Both fixed formats are now superseded by the more convenient free region input format, recently
introduced. Free format is based on body mnemonic “names” instead of sequential numerical identifiers and
allows the use of parentheses to perform more complex boolean operations. However, the two fixed formats
are still available for backward compatibility reasons. With any input format, a line having an asterisk (*) in
column 1 is treated as a comment card.
Each region is described as a combination of one or more bodies, by means of the three operator symbols:
- + and OR
referring respectively to the boolean operations of subtraction (or complement), intersection and union.
Each body is referred to by its sequential number in the body description table (see 8.2.3).
Input for each region extends on one or more lines, in a format which depends on the value of IDBG on the
Geometry Title line (see 8.2.2).
If IDBG = 0, 10 or -10, region input format is (2X,A3,I5,9(A2,I5));
if IDBG =100 or -100, region input format is (2X,A3,I5,7(A2,I7));
where the 3 characters in columns 3–5 are:
– on the first input line of a given region, an arbitrary non-blank string chosen by the user (it can be
used, together with optional comment lines, to help identifying the region or the region material).
Note that regions are identified in the code by an integer number corresponding to the order of their
input definition: therefore it can be useful (but not mandatory) to have that number appearing in the
string. For instance, if the 5th region is air, it could be labelled AI5.
– on all continuation lines, columns 3-5 must be blank.
The integer in columns 6–10:
– is the number of regions which can be entered by a particle leaving any of the bodies defined for the
region being described (leave blank in continuation lines). The NAZ number is used to allocate memory
for this so-called “contiguity list”, and it is not essential that it be exact (if left blank, it is set to 5).
Any number is accepted, but if the final sum of these integers is close to the actual sum, tracking speed
can be slightly improved.
in columns 11-73:
– alternate as many 2-character fields (’OR’ or blank) and integer fields (body numbers preceded by + or -
sign), as are needed to complete the description of that region (see below for an explanation of symbol
meaning). If one line is not sufficient, any number of continuation lines can be added (identified by a
blank field in column 3–5).
After the last region description, end with a line having the code END in columns 3–5.
If a body number appears in a zone description preceded by a + operator, it means that the zone being
described is wholly contained inside the body.
If a body number appears in a zone description preceded by a - operator, it means that the zone being
described is wholly outside the body.
Obviously, in the description of each region the symbol + must appear at least once.
Examples:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7..
BA1 4 +7 +3
* (the above region is the part of space common to body 7 and 3)
MU2 7 +3 -4 -7 +20
* (the above region is the part of space common to body 3 and 20,
* excluding however that which is inside body 4 and that which is
* inside 7)
AIR 5 +19
* (the latter region coincides entirely with body 19)
In some instances a region may be described in terms of subregions, lumped together. The OR operator is
used as a boolean union operator in order to combine subregions (partially overlapping or not). Subregions
(also called “zones” in this manual) are formed as intersections or differences as explained above, and then
the region is formed by a union of these subregions. When OR operators are used there are always two or
more of them, and they refer to all body numbers following them until the next OR or the end of the region
description, each body being preceded by its + or - sign.
Examples:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...
SA7 11OR +4 +6 -7 -8OR +6 +3 -21
* <---- first subregion -----><- second subregion ->
G18 2OR +9OR +15OR +1OR +8 -2OR +8 -3OR +8 +18
* < 1st >< 2nd >< 3rd ><--- 4th ----><--- 5th ----><--- 6th ---->
OR +12 -10 -11 -13 -14
*< blank ><---- 7th and last subregion -----> (continuation line)
Each region is described as a combination of one or more bodies, by means of the three operator symbols:
- + and |
referring respectively to the boolean operations of subtraction (or complement), intersection and union.
Each body is referred to by its “name” (an alphanumeric string of up to 8 characters, the first character
being alphabetical) in the body description table (see the description of free format body input in 8.2.3.2).
Input for each region starts on a new line and extends on as many continuation lines as
are needed, each line having a maximum length of 132 characters. It is of the form:
REGNAME NAZ boolean-zone-expression
or REGNAME NAZ | boolean-zone-expression | boolean-zone-expression | . . .
where REGNAME, NAZ and the remaining part are separated by one or more blanks.
– REGNAME is the region “name” (an arbitrary unique alphanumeric character string chosen by the user).
The region name must begin by an alphabetical character and must not be longer than 8 characters.
– NAZ is an integer indicating (approximately) the number of regions which can be entered by a particle
leaving any of the bodies appearing in the region description that follows. The NAZ number is used
to allocate memory for this so-called “contiguity list”, and it is not essential that it be exact. Any
number is accepted, but if the final sum of these integers is close to the actual sum, tracking speed can
be slightly improved. In free format input, NAZ may not be left blank.
– “boolean-zone-expression” is a sequence of one or more body names preceded by the operators +
(intersection) or - (complement or subtraction). A zone expression can be contained inside one or more
sets of left and right parentheses. Several zone expressions can be combined by the union operator |
(corresponding to OR in fixed format input).
When | operators are used there are always two or more of them, and they refer to all bodies following
them until the next | or the end of the region description, each body being preceded by its + or - sign.
In evaluating the expressions, the highest operator precedence is given to parentheses (the innermost
ones first), followed by the | operator. In each zone expression, at least one body name preceded by +
must be present. If one line is not sufficient, any number of continuation lines can be added. Blanks
are ignored.
Region description ends with a line containing the single string END.
– If a body name is preceded by a + operator in an expression describing a zone (or a zone component
surrounded by parentheses) it means that the zone or zone component being described is wholly
contained inside the body (boolean intersection).
– If a body name is preceded by a - operator in an expression describing a zone (or a zone component
surrounded by parentheses) it means that the zone or zone component being described is wholly outside
the body (boolean complement).
Obviously, in the description of each region the symbol + must appear at least once. The same is true for each
zone (subregion delimited by | operators) and for each zone component (subzone delimited by parentheses).
Example:
H2Osphere 5 +marble
* Region "H2Osphere" coincides entirely with body "marble"
Examples of regions consisting of the union of several zones, possibly (but not necessarily) partially overlap-
ping:
Corners 6 | +dice +topNorth | +dice +topEast | +dice +topSouth |
+dice +topWest | +dice +botNorth | +dice +botEast | +dice +botSouth | +dice
+botWest
* Region "Corners" is made of the 8 corners of a cube, each of which is
* obtained by the intersection of a cubic body "dice" and a tilted plane
* described by a vector pointing to the centre of the cube
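As a further illustration (with purely hypothetical body names), a whole parenthesised union of zones can
be subtracted in a single step:
Shield 5 +bigBlock -(+hole1 | +hole2 | +hole3)
* Region "Shield" is the part of space inside body "bigBlock" which is
* outside each of the bodies "hole1", "hole2" and "hole3"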
Region data must be terminated by a line with the string END (in columns 3–5 if fixed format is used).
This is an optional set of cards which must be input if (and only if) flag IVLFLG in the CG Title line has
been given a value 3 (see 8.2.2). As many volume definition cards must be given as are needed to input a
volume for every region. The input variable is an array VNOR(I) = volume of the Ith region. The format is
(7E10.5). Volume data are used by Fluka only to normalise energy or star densities per region, requested
by the SCORE command (p. 226).
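For instance, for a geometry with 9 regions, a possible set of volume definition cards (the values, in cm³,
are purely illustrative) could be:
    1000.0    1000.0     400.0     268.3    1200.0    3000.0      50.0
     785.4     785.4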
This is an optional card for modular geometries. Its use needs some more effort and preparation.
– The basic unit of the geometry, composed of an arbitrary number of regions, must be described in full
detail in the body and region data.
– Additional body and region data must also be provided to describe “container” regions representing
the “boxes”, or lattice cells, wherein the basic unit has to be replicated. No material assignment is
needed for these lattice-cell regions.
– A roto-translation must be defined (option ROT–DEFI, p. 221) and associated with each lattice to
provide the transformation bringing from any point and direction inside each lattice cell to the corre-
sponding point and direction in the basic unit. Alternatively, a user routine (LATTIC, see 13.2.10) can
be written for the same purpose.
– The LATTICE card itself identifies the lattice cells and establishes the correspondence between region
number and lattice cell number, where the region number is the sequential number in the region table,
and the lattice cell number is that used in the tracking to address the transformation routine, and is
chosen by the user. Contiguous numbering is recommended for memory management reasons, but is
not mandatory. Non-contiguous numbering can be done using several LATTICE cards.
WHAT(4) = lattice number of the first lattice cell (or corresponding name), assigned to region WHAT(1)
Default : No default
WHAT(5) = lattice number of the last lattice cell (or corresponding name), assigned to region WHAT(2)
Default : No default
A single geometry can be a mixture of modular areas, described by lattices, and “normal” areas,
described by standard regions. Many different LATTICE cards may be issued in the same geometry, when
different symmetries are present in different areas. In principle, any analytical symmetry transformation can
be implemented (rotation, translation, reflection, or combination of these).
Care must be taken to ensure that any region in the basic unit is fully contained (after coordinate trans-
formation) in any lattice cell belonging to its symmetry transformation. Regions falling across two different
lattice cells would lead to unpredictable behaviour.
The basic unit does not necessarily need to describe a “real” portion of the geometry: it can also be used only
as a prototype to be replicated in any number of “copies”.
NOTE: The lattice cell regions do not need to be included in the other input option cards. Materials,
thresholds, etc., must be assigned only to the regions contained in the basic unit. Of course, this implies
that all copies of the same basic unit share the same material, setting and biasing properties.
IMPORTANT: If the geometry is being described in free format, using alphanumeric names as body and
region identifiers, names must be used also in the LATTICE card(s) for both regions and lattices.
A card with the string GEOEND in column 1–6 must terminate the combinatorial geometry input (see p. 139).
The GEOEND card can be used also to activate the geometry debugger, using the WHAT and SDUM
parameters. In this case, a second GEOEND card (continuation) may be necessary. It is recommended that
a STOP card follow immediately, to avoid starting transport once debugging is completed.
WHAT(1) = Number of mesh intervals in the X-direction between Xmin and Xmax
Default = 20.0
WHAT(2) = Number of mesh intervals in the Y-direction between Ymin and Ymax
Default = 20.0
WHAT(3) = Number of mesh intervals in the Z-direction between Zmin and Zmax
Default = 20.0
SDUM = “ & ” in any position in columns 71 to 78 (or in the last field if free format is used)
Default (option GEOEND not given): not allowed! GEOBEGIN and GEOEND must always be present.
See the Notes to GEOEND option (7.31) for more details and instructions.
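As a sketch (the mesh limits and the number of intervals are arbitrary and must be adapted to the problem),
a debugging request scanning a 200 cm cube centred on the origin with a 50 × 50 × 50 mesh could look like:
GEOEND         100.0     100.0     100.0    -100.0    -100.0    -100.0DEBUG
GEOEND          50.0      50.0      50.0                              &
STOP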
1. Assign an organ to each voxel. Each organ is identified by a unique integer ≤ 32767. The numbering
does not need to be contiguous, i.e., gaps in the numbering sequence are allowed. One of the organs
must have number 0 and plays the role of the medium surrounding the voxels (usually vacuum or air).
The assignment is done via a special file where the organ corresponding to each voxel is listed sequen-
tially in Fortran list-oriented format, with the x coordinate running faster than y, and y running faster
than z. In practice the file is always written by a program similar to the one reported below. The user
will need to modify the values of the parameters DX, DY, DZ, NX, NY, NZ (respectively voxel size and
number of voxels for each coordinate), and possibly some other more trivial things (file names, title,
reading from the original CT scan file).
The following program also takes care of compacting the original organ numbers, eliminating all
gaps in the sequence, and writes a translation table to the screen (see the listing reported below).
After having modified the program (assumed to be in a file writegolem.f), compile it:
$FLUPRO/flutil/fff writegolem.f
link it with the FLUKA library:
$FLUPRO/flutil/lfluka -o writegolem writegolem.o
execute it:
./writegolem
The result will be a file golem.vxl (or equivalent name chosen by the user) which will be referred to
by a special command line in the geometry input (see 2 below).
      PROGRAM WRITEGOLEM
      INCLUDE '(DBLPRC)'
      INCLUDE '(DIMPAR)'
      INCLUDE '(IOUNIT)'
*     COLUMNS: FROM LEFT TO RIGHT
*     ROWS:    FROM BACK TO FRONT
*     SLICES:  FROM TOP TO BOTTOM
      PARAMETER ( DX = 0.208D+00 )
      PARAMETER ( DY = 0.208D+00 )
      PARAMETER ( DZ = 0.8D+00 )
      PARAMETER ( NX = 256 )
      PARAMETER ( NY = 256 )
      PARAMETER ( NZ = 220 )
      DIMENSION GOLEM(NX,NY,NZ)
      INTEGER*2 GOLEM
      CHARACTER TITLE*80
      DIMENSION IREG(1000), KREG(1000)
      INTEGER*2 IREG, KREG
*
      CALL CMSPPR
      DO IC = 1, 1000
         KREG(IC) = 0
      END DO
      OPEN(UNIT=30,FILE='ascii_segm_golem',STATUS='OLD')
      READ(30,*) GOLEM
      NO=0
      MO=0
      DO IZ=1,NZ
         DO IY=1,NY
            DO IX=1,NX
               IF (GOLEM(IX,IY,IZ) .GT. 0) THEN
                  IC = GOLEM(IX,IY,IZ)
                  MO = MAX (MO,IC)
                  DO IR=1,NO
Starting from Fluka2011.2b, the voxel files can contain an arbitrary number of extra records of 80
characters each, which are read and interpreted as ordinary input cards. This makes it possible to embed in the
voxel files information such as material definitions, material assignments, correction factors etc., which
are often generated by automatic programs out of a CT scan. Flair contains tools for reading CT scans
in DICOM format and for automatically generating a voxel file containing the material and correction factor
information, according to a Hounsfield number to material/density translation algorithm which can
be tuned by the user.
2. Prepare the usual Fluka input file. The geometry must be written like a normal Combinatorial
Geometry input (in any of the allowed formats, as part of the normal input stream or in a separate
file), but in addition must include:
– A VOXELS card as a first line, before the Geometry title card (8.2.2), with the following informa-
tion:
WHAT(1), WHAT(2), WHAT(3) = x, y, z coordinates chosen as the origin of the “voxel volume”,
i.e., of a region made of a single RPP body (8.2.4.1) which contains all the voxels
WHAT(4) = index (or name) of the ROT–DEFIni card for a possible roto-translation of the
VOXELs
WHAT(5), WHAT(6): not used
SDUM = name of the voxel file (extension will be assumed to be .vxl)
– The usual list of NB bodies, not including the RPP corresponding to the “voxel volume” (see VOXELS
card above). This RPP will be generated and added automatically by the code as the (NB+1)th
body, with one corner in the point indicated in the VOXELS card, and dimensions NX*DX, NY*DY
and NZ*DZ as read from the voxel file.
– The usual region list of NR regions, with the space occupied by body NB+1 (the “voxel volume”)
subtracted. In other words, the NR regions listed must cover the whole available space, except
the space corresponding to the “voxel volume”. This is easily obtained by subtracting body NB+1
in the relevant region definitions, even though this body is not explicitly input at the end of the
body list. The code will automatically generate and add several regions:
Name Number Description
VOXEL NR+1 this is a sort of “cage” for all the voxels. Nothing
(energy etc.) should ever be deposited in it: the user
shall assign VACUUM to it
VOXEL001 NR+2 containing all voxels belonging to organ number 0.
There must be at least 2 of such voxels, but in general they are many more.
The assignment of materials shall be made by command ASSIGNMAt (p. 66) (and in a similar way other
region-dependent options) referring to the first NR regions in the usual way, and to the additional regions
using the correspondence to organs as explained above.
Output
The output of Fluka consists of:
– a main (standard) output, written on logical output unit LUNOUT (pre-defined as 11 by default)
– a scratch file, of little interest to the user, written on output unit LUNGEO (16 by default). However, if
the rfluka script is used to run Fluka, this file is automatically deleted by the script at the end of
the run
– a file with the last random number seeds, unit LUNRAN (2 by default)
– a file of error messages (if any), unit LUNERR (15 by default)
– any number (including zero) of estimator output files. Their corresponding logical unit number is
defined by the user: in case the number chosen coincides with one of the above, in particular LUNOUT,
estimator formatted output will appear as part of the corresponding output stream. However, this
is not recommended, and it is not allowed anyway in the case of unformatted output. Generally, the
user can choose between formatted and unformatted output. Only formatted output is presented here,
while unformatted output is described at the end of each option description
– possible additional output generated by the user in any user routine, in particular USROUT (see 13.2.29,
p. 373)
The main (standard) output contains the following items:
– A banner page
– The FLUKA license
– A header with the Fluka version and the time when the output was printed
– A straight echo of the input cards.
Each input line is echoed in output, but not character by character. The input WHATs and SDUMs
are read, and then written with a different format. Any alignment error shows up as a number or a
character string different from the one intended: therefore in case of problems checking this part of the
output is more effective than checking the input itself.
Comments are reproduced in output, with the exception of in-line comments preceded by an exclama-
tion mark (!)
– Geometry output (if not redirected to a separate file, see GEOBEGIN, Note 1). The geometry output
(which is part of the standard output by default, but can be re-directed to a separate file) begins with
an echo of the geometry title and the value of the two input variables IVLFLG (Input VoLume FLaG) and
IDBG (in the original CG a debugging flag, but now used to select various format lengths). Then there
is an echo of the body and region input, including comment lines, and some lines left over from the
original Morse CG, but which are of little or no meaning in the context of Fluka: for instance the
arrays IR1 and IR2 (originally material and biasing assignment to regions, which in Fluka however
are not part of the geometry data). Other information concerns the memory allocation: FPD (Floating
Point Data) , INTEGER ARRAY, zone locations (“zone” and “region” in Fluka have a different meaning
than in Morse). “Code zones” indicates the subregions defined by the input OR operator.
The next sections, “Interpreted body echo” and “Interpreted region echo”, show the numbers assigned
by the program to bodies and regions defined by alphanumerical identifiers (if the traditional fixed
format has been used, these output sections are of little interest).
The interpreted echos are followed by the volumes used to normalise a possible output from option
SCORE (p. 226). Then, for each region in whose description the OR operator is used, one line similar
to the following is printed at the end of the geometry output:
*** Region # 2 Dnear according to no overlapping ORs ***
*** Region # 3 Dnear according to possible overlapping ORs ***
This information concerns the possibility that random number sequences might not be reproducible, a
technical issue which does not affect the quality of the results but can be important for debugging or
other purposes (see a more detailed explanation in Note 4 to option GLOBAL) (p. 142).
– Basic nuclear data used in the program
The data reported are nuclear masses and model parameters used by the program. This part of the
output is constant and does not depend on the problem input (it is printed even if the calculation is
purely electromagnetic and does not depend on nuclear models).
– Information on physical models used in the run
The nuclear models used by Fluka to describe intermediate nuclear effects and interactions have been
continuously improved since 1989. They are automatically activated if the user chooses the appropriate
defaults. Depending on the latter and on the input options used, an informative message is issued
concerning the presence of the following:
– Evaporation from residual nucleus
– Production of deexcitation gammas
– Transport of heavy evaporation products
– High-energy fission
– Fermi Break-Up
– Material quantities related to multiple scattering
The values of various quantities used by the Fluka multiple Coulomb scattering algorithm are printed
for each material and for each type of charged particle.
– Memory allocation information
Starting and ending location in memory of various arrays dynamically allocated are printed at different
points on main output, depending on the order of input cards.
– Table of correspondence between materials used in the run and materials in the low-
energy neutron cross section library
Example:
*** Fluka to low en. xsec material correspondence: printed atomic densities are
meaningless when used in a compound ***
Fluka medium Name Xsec medium atomic density Id. 1 Id. 2 Id. 3
number number ( at/(cm barn) )
1 BLCKHOLE 0 0.0000E+00 0 0 0
2 VACUUM 1000 0.0000E+00 0 0 0
6 CARBON 1 0.0000E+00 6 -2 293
7 NITROGEN 2 0.0000E+00 7 -2 293
8 OXYGEN 3 5.3787E-05 8 16 293
17 LEAD 5 3.2988E-02 82 -2 293
21 ARGON 4 0.0000E+00 18 -2 293
Compounds are not listed in this table, since for the time being the Fluka neutron library contains
only single elements or nuclides. “Fluka medium number” refers to the material number given by the
user via WHAT(4) in option MATERIAL; “Xsec medium number” is the material index used internally by
the Fluka low-energy neutron package. Such an index is assigned only to library materials actually used
in the current problem, unlike “Fluka media” which can be pre-defined or defined in input, without
being actually assigned to any region.
Blackhole and vacuum are always assigned Xsec index 0 and 1000.
Atomic densities refer to the material in its elemental form and are printed as 0.0000E+00 if the
corresponding element is used only as part of a compound.
The last 3 columns in the table are the material identifiers unique to each library material (see 10.4,
p. 322 and option LOW–MAT, p. 158).
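Purely as an illustration of how these identifiers are used, the (default) association of the Fluka material
CARBON with the library material listed above could be requested explicitly with a card similar to the
following (a sketch only; see the LOW–MAT description, p. 158, for the exact meaning of each field):
LOW-MAT       CARBON       6.0      -2.0     293.0                    CARBON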
– Information on the low-energy neutron cross sections
If low-energy neutrons are transported, some problem-specific information may be printed, e.g., ma-
terials for which recoil or (n,p) protons are produced explicitly and not accounted for by kerma fac-
tors (usually hydrogen and nitrogen), or materials for which pointwise cross sections are used (see
LOW–NEUT, WHAT(6) and Note 5, p. 161). This is followed by generic information on the neutron
cross section library used (number of energy groups and angles, number of materials, etc.).
If the user requests a more detailed printout (option LOW–NEUT, p. 160) the following information is
printed, depending on the value of WHAT(4):
If WHAT(4) = 1.0:
For each neutron energy group:
– group energy limits
– average energies
– velocities and momenta corresponding to the group energy limits of each gamma group
– thermal neutron velocities
For each material used: availability of residual nuclei information and, for each neutron energy group:
SIGT = total cross section in barn
SIGST = “scattering” cross section in barn. Actually equal to σ(n, n) + 2σ(n, 2n) + 3σ(n, 3n) etc.
PNUP = upscatter probability (can be different from zero only if the number of thermal groups is
more than one)
PNABS = Probability of Non-ABSorption (= scattering). It is = SIGST/SIGT, and can sometimes
be > 1 because of (n,xn) reactions
GAMGEN = GAMma GENeration probability = gamma production cross section divided by SIGT and
multiplied by the average number of gammas per (n,γ) reaction
NU*FIS = fission neutron production = fission cross section divided by SIGT and multiplied by ν
(nu), the average number of neutrons per fission
EDEP = kerma contribution in GeV per collision
PNEL, PXN, PFISS, PNGAM = partial cross sections, expressed as probabilities (i.e., ratios to
SIGT). In the order: non-elastic, (n,xn), fission, (n,γ)
The line: (RESIDUAL NUCLEI INFORMATIONS AVAILABLE), if present, indicates the possibility to use
option RESNUCLEi with WHAT(1) = 2.0 (p. 218).
If WHAT(4) = 2.0:
the same as above plus:
For each material used and for each neutron energy group:
the downscattering matrix (group-to-group transfer probabilities), as in the following example:
CROSS SECTIONS FOR MEDIA 4
.............................................................
(RESIDUAL NUCLEI INFORMATIONS AVAILABLE)
GROUP....DOWNSCATTER MATRIX
.............................................................
6....0.4927 0.0148 0.0006 0.0012 0.0017 0.0023 0.0028 0.0033
0.0038 0.0045 0.0056 0.0070 0.0087 0.0104 0.0120 0.0134
0.0149 0.0163 0.0175 0.0184 0.0190 0.0193 0.0193 0.0190
0.0185 0.0178 0.0164 0.0329 0.0311 0.0278 0.0247 0.0219
0.0198 0.0158 0.0126 0.0101 0.0112 0.0070 0.0026 0.0008
0.0004 0.0002 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000
.............................................................
The above table means: after scattering in material 4 of a neutron in energy group 6, the prob-
ability of getting a neutron in the same group is 49.27 %; that to get a neutron in the following
group (group 7) is 1.48 %, in group 8 is 0.06 %, etc. This matrix, normalised to 1, gives the relative
probability of each final group: the actual probability per collision must be obtained by
multiplying by PNABS, the scattering cross section divided by the total cross section and multiplied
by the average number of neutrons per non-absorption reaction.
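For illustration, assuming a PNABS value of 0.98 for group 6 of this material (the actual value is printed
in the cross section table described above), the probability per collision that a neutron scattering in
group 6 emerges in group 7 would be 0.0148 × 0.98 ≈ 0.0145.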
The neutron-to-gamma group transfer probabilities are also printed.
If WHAT(4) = 3.0:
the same as above plus:
For each material used and for each neutron energy group:
Cumulative scattering probabilities and scattering polar angle cosines as in the following
example:
1 SCATTERING PROBABILITIES AND ANGLES FOR MEDIA NUMBER 4
GP TO GP PROB ANGLE PROB ANGLE PROB ANGLE
.............................................................
6 6 0.8736 0.9557 0.9823 0.3741 1.0000 -0.6421
6 7 0.4105 0.8383 0.8199 0.1057 1.0000 -0.7588
6 8 0.4444 0.0001 0.7223 0.7747 1.0000 -0.7746
6 9 0.4444 -0.0001 0.7223 -0.7746 1.0000 0.7746
6 10 0.4444 0.0000 0.7223 -0.7746 1.0000 0.7746
6 11 -1.0000 0.0000 0.0000 0.0000 0.0000 0.0000
6 12 -1.0000 0.0000 0.0000 0.0000 0.0000 0.0000
6 13 -1.0000 0.0000 0.0000 0.0000 0.0000 0.0000
.............................................................
The above table reports 3 discrete angle cosines (corresponding to a Legendre P5 expansion )
for each group-to-group scattering combination, with the respective cumulative probabilities. For
instance, the line:
6 7 0.4105 0.8383 0.8199 0.1057 1.0000 -0.7588
means that neutron scattering from energy group 6 to group 7 has a 0.4105 probability to be at
a polar angle of 33 degrees (0.8383 = cos 33◦ ); a probability 0.8199 − 0.4105 = 0.4094 to be at
84◦ = arccos(0.1057); and a probability 1.000 − 0.8199 = 0.1801 to be at 139◦ = arccos(−0.7588).
A -1.0000 probability indicates an isotropic distribution.
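The following minimal Fortran sketch illustrates how such a discrete distribution can be sampled (the
variable names and the fixed pseudo-random values are illustrative only and do not correspond to the
actual Fluka coding):
      PROGRAM DSCANG
*     Illustrative only: pick one of the 3 discrete angle cosines of the
*     6 -> 7 transfer shown above, according to the cumulative
*     probabilities, then choose a uniform azimuthal angle.
      DOUBLE PRECISION CUMP(3), COSTH(3), XI, ETA, CTH, PHI, TWOPI
      PARAMETER ( TWOPI = 6.283185307179586D+00 )
      DATA CUMP  / 0.4105D+00, 0.8199D+00, 1.0000D+00 /
      DATA COSTH / 0.8383D+00, 0.1057D+00, -0.7588D+00 /
*     XI and ETA stand for two pseudo-random numbers in [0,1); fixed
*     values are used here only to make the example self-contained
      XI  = 0.63D+00
      ETA = 0.25D+00
      CTH = COSTH(3)
      DO 10 I = 1, 3
         IF ( XI .LT. CUMP(I) ) THEN
            CTH = COSTH(I)
            GO TO 20
         END IF
 10   CONTINUE
 20   CONTINUE
      PHI = TWOPI * ETA
      WRITE(*,*) ' Sampled cos(theta), phi: ', CTH, PHI
      END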
– Table of available particle types
This table presents constant properties of all particles transported by Fluka (name, id-number, rest
mass and charge), plus two columns indicating:
– the particles which are discarded, by default (neutrinos) or on user’s request (option DISCARD ,
p. 104)
– the particle decay flag (see option PHYSICS with SDUM = DECAYS, p. 202)
The available generalised particles and their id-numbers are also listed.
– An expanded summary of the input:
This part of output summarises the most important input data, decoded and presented in a more
colloquial way.
(a) Beam properties
Information given by BEAM and BEAMPOS options (p. 71, 76): type of particle, en-
ergy/momentum, direction, etc. If a user SOURCE is used (p. 228, 363), it is indicated here
(b) Energy thresholds
Particle cutoff energies of hadrons and muons as set by default or by option PART–THR (p. 196).
Neutron cutoff means the threshold between high and low-energy (multi-group) neutron treat-
ment. Low-energy neutron group cutoffs are reported by region in a separate table: see (f) below.
Electron and photon cutoffs are also reported in a separate table.
(c) Termination conditions
The maximum number of histories and other ending options set in card
START (p. 231) are summarised here.
(d) Multiple scattering (hadrons and muons)
The logical flags printed are related to option MULSOPT
(p. 174) with SDUM = GLOBAL or GLOBHAD. The number of single scatterings to be performed
at boundary crossing is also printed.
(e) Treatment of electrons and photons, including multiple scattering. For historical reasons dating
from the time when Fluka was handling only high-energy particles, the title of this part is
“Electromagnetic Showers”. The logical flags which follow are related to option MULSOPT with
SDUM = GLOBAL or GLOBEMF. The number of single scatterings to be performed at boundary
crossing is also printed.
(f) Biasing parameters
This table reports several region-dependent biasing and cutoff parameters:
– Particle importances (set by WHAT(1) and WHAT(3) of option BIASING, p. 80)
– Russian Roulette factor (multiplicity biasing set by WHAT(2) of option BIASING)
– Cutoff group (WHAT(1) of option LOW–BIAS, p. 154)
– Group limit for non-analogue absorption (WHAT(1) of option LOW–BIAS)
– Non-analogue survival probability (WHAT(3) of option LOW–BIAS)
– Group limit for biased downscattering (WHAT(1) of option LOW–DOWN)
– Biasing downscattering factor (WHAT(2) of option LOW–DOWN)
(g) Estimators requested
For each requested estimator (USRBIN, USRBDX, USRCOLL, USRTRACK,
USRYIELD, RESNUCLEi, DETECT), a complete description is printed (detector number, par-
ticle type, defining region(s) or binning limits, number of intervals/bins, area/volume, lin-
ear/logarithmic, type of quantity etc.). If the estimator output file is formatted, the same infor-
mation is printed also there in an identical format, otherwise it is available on the corresponding
binary file.
Note that the estimator detectors are numbered separately according to their estimator type (Bdrx
n. 1, Bdrx n. 2 etc.; Binning n. 1, Binning n. 2 etc. — independent from the type of
binning — Track n. 1, Track n. 2 etc.), in the order they appear in input. The estimator
type and the detector number can be passed (as variables ISCRNG and JSCRNG in COMMON SCOHLP
) to the user routines COMSCW and FLUSCW (p. 350, 352), to allow different kinds of weighting on
the scored quantities, depending on the detector number (see 13.2.2, 13.2.6)
(h) Materials defined and pre-defined
This table includes all materials pre-defined and not overridden by the user, plus those defined in
the user input via options MATERIAL and COMPOUND (p. 163, 84), independently of whether or not
they have been assigned to any region.
The different columns report the material number, name, atomic number Z and atomic weight A
(effective Z, A for compounds), density, inelastic and elastic scattering lengths for the primary
particles at the energy defined by option BEAM (p. 71) (meaningful only for primary hadrons),
radiation length (value not used by the program) and inelastic scattering length for neutrons at
the threshold momentum (by default 20 MeV unless overridden by PART–THR, p. 196).
For compounds, an insert is printed with the element composition, plus the atom fraction and
partial density of each component.
(i) dE/dx tabulations (if requested, see DELTARAY, p. 98)
For each assigned material and for each charged heavy particle (hadrons, muons, recoil ions) a
table is printed with the following data:
energy, unrestricted stopping power, η (= βγ), shell correction, restricted stopping power (ac-
cording to the threshold specified by the user with DELTARAY, WHAT(1)).
(j) Other stopping power information
The following is printed for each material used:
gas pressure (if applicable), average excitation energy, effective Z/A, Sternheimer density effect
parameters, delta ray production threshold, description level for stopping power fluctuations
(set with IONFLUCT, WHAT(1) and WHAT(3), p. 145), and threshold for pair production and
bremsstrahlung by heavy particles (set with PAIRBREM, p. 194).
(k) Photonuclear reaction requests
A line of information is printed for each material in which muon photonuclear interactions have
been activated (see MUPHOTON, p. 178). Similar information is printed for gamma photonu-
clear interactions, with the PHOTONUC flag for the relevant energy ranges (see p. 198).
(l) Table of correspondence between materials and regions “Correspondence of regions and EMF-
FLUKA material numbers and names”
This table corresponds to the ASSIGNMAt command (p. 66 ) and may be useful to check the ma-
terial assignment to the various regions, as well as the regions where a magnetic field is present.
The minimum step size set with option STEPSIZE (p. 232) is also reported. The last column refers
to the maximum step size for charged particles.
(m) Rayleigh scattering requests
A line of information is printed for each material in which Rayleigh scattering has been activated
(option EMFRAY, p. 122).
(n) Fluorescence requests
For each material for which fluorescence X-ray production has been requested, information about
the relevant photoelectric cross section edges is reported (option EMFFLUO, p. 120).
(o) Table of correspondence between regions and EMF materials/biasing
For each assigned material, this table reports the name, number (the internal numbering sequence
is different in the EMF part of Fluka), electron and photon energy cutoffs (set with EMFCUT,
p. 113), and four logical flags indicating whether some options have been activated for the material
concerned (T means on, F off).
The meaning of the flags is:
BIAS −→ Leading Particle Biasing (LPB) (set by EMF–BIAS or EMFCUT, p. 108 and 113)
Ray. −→ Rayleigh scattering (set by EMFRAY with WHAT(1) ≥ 1.0, see p. 122)
S(q,Z) −→ Compton binding corrections (EMFRAY with WHAT(1) = 1., 3., 4. or 6.)
Pz(q,Z) −→ Compton Doppler broadening (EMFRAY with WHAT(1) = 4. or 6.)
The energy thresholds below which LPB is played for electrons and photons (WHAT(2) and
WHAT(3) of EMF–BIAS) are also reported, and so is the LPB bit-code for selected effects (WHAT(1)
of EMF–BIAS)
– Random number generator calls and CPU time for some histories
During the calculation, a couple of lines are printed from time to time after a certain number of histories (or after each history,
depending on WHAT(5) of option START, p. 231)1.
One of the two lines contains the number of random number generator calls, expressed in hexadecimal
format.
The second line reports, in the following order: the number of primary particles handled, the number
of particles still to be handled (with respect to the maximum requested by WHAT(1) of START), the
number of particles which can still be handled judging from the average time spent so far, the average
time per primary based on the histories already completed, and the estimated time still available with
respect to WHAT(3) of START.
The sequence of random number call lines is terminated by a message (FEEDER is the Fluka routine
which starts every history):
All cases handled by Feeder
Run termination forced from outside
if the run has been shortened by the Fluka “stop file”
Feeder ended due to timeout
if the time limit has been reached — see WHAT(6) of START or system-imposed time limit
1 Occasional warning messages printed during particle transport are found between the lines, especially if photonuclear
reactions have been activated: they have mainly a temporary debugging purpose and should be ignored
In problems involving fissionable materials such as ²³⁵U, the “missing” energy can reach very large negative values.
Note that a similar, but more detailed energy balance can be obtained event by event with option
EVENTDAT (p. 314). See description below, in 9.5.3.
The file, written on output unit LUNGEO (16 by default) is a simple echo of the Combinatorial Geometry
input, in a different format. At input time, Fluka stores temporarily the geometry data on this file and
calculates the length of the various geometry arrays, which must include also additional information (e.g.,
the DNEAR value for each body, see Note 4 to option GLOBAL, p. 142). Then the data are retrieved and the
final memory allocation takes place.
The following type of message is also not important, and is especially frequent in runs with photonuclear
reactions activated:
*** Umfnst: eexany,eexdel,eexmin,amepar,enenew,np,ikpmx,eexnew,eexmax 0.
0.002 0.004319 1.11498839 1.09082268 2 0 0. 0.0171591096
Another type of informative message, indicating that a step counter has been reset because it was approaching
the upper limit for an integer, is the following:
*** Emfgeo: Ncoun 2000000000
Generally, messages issued by the geometry routines are more important. However, fatal ones are written
to standard output, for instance:
EXIT BEING CALLED FROM G1, NEXT REGION NOT FOUND
In such cases, it is recommended to run the geometry debugger (see command GEOEND on page 139) to find
and correct the error.
The following one indicates a real problem if repeated more than a few times:
[. . . skipped . . . ]
Particle index 3 total energy 5.189748600E-04 GeV Nsurf 0
We succeeded in saving the particle: current region is n. 2 (cell # 0)
As it can be seen, the program has some difficulty to track a particle in a certain direction, and it tries
to fix the problem by “nudging” the particle by a small amount, in case the problem is due to a rounding
error near a boundary. If the message appears often, it is recommended to run the geometry debugger
centring around the position reported in order to find if there is an error in the geometry description.
Other geometry errors concern particles with direction cosines not properly normalised. This happens
often with user routines where the user has forgotten to check that the sum of the squares be = 1.0D0
in double precision. For instance, the following message is generally caused by an inaccurate MAGFLD user
routine:
MAGNEW, TXYZ: ...[sum of the squares]... U,V,V: ...[3 cosines]...
Another type of message indicates that insufficient memory has been allocated for the “contiguity list” (list of zones
contiguous to each zone, see 8.2.7.1, 8.2.7.3). This is not an actual error, but it is suggested that the user could
optimise computer time by increasing the values of the NAZ variable in the geometry region specifications.
If the formatted option is chosen, it is possible to write the estimator output as part of the main
output (logical output unit 11). It is also possible to write the results of more than one detector on the same
file. However, the task of post-processing analysis programs is easier if estimators of a different kind (e.g.,
USRBIN and USRBDX), or even detectors with a different structure (e.g., two USRBINs with a different number
of bins), have their outputs directed to separate files.
In general, the formatted output of the various estimators begins with:
– The title of the run (as given in input with option TITLE, p. 240).
– Date and time
– Total number of particles followed, and their total weight. (Note that the number of particles is written
in format I7, which may be insufficient for very large runs. In this case the value will be replaced by a
line of asterisks)
Option DETECT (p. 101) produces only unformatted output (see DETECT description on p. 101 for instruc-
tions on how to read it). As for all other estimators (p. 307), a complete description in clear of the requested
scoring is printed on the standard output. For instance:
Detector n. 1 "COINC " , Ecutoff = 3.142E-07 GeV
1024 energy bins 2.717E-03 GeV wide, from 3.700E-03 to 2.786E+00 GeV
energy deposition in 1 regions, (n.: 3)
in coincidence with
energy deposition in 1 regions, (n.: 4)
Option EVENTBIN (p. 124) produces either unformatted or formatted output. The formatted output is
seldom used because of its size (the binning results, similar to those produced by option USRBIN (see 9.5.6
below), are printed after each primary event). As for most other estimators, a complete description in clear
of the requested scoring is printed also on the standard output. For instance:
Cartesian binning n. 1 "Eventscore" , generalised particle n. 208
X coordinate: from -1.5000E+02 to 1.5000E+02 cm, 75 bins ( 4.0000E+00 cm wide)
Y coordinate: from 1.0000E+02 to 2.0000E+02 cm, 50 bins ( 2.0000E+00 cm wide)
Z coordinate: from -2.0000E+01 to 1.8000E+02 cm, 20 bins ( 1.0000E+01 cm wide)
data will be printed on unit 21 (unformatted if < 0)
accurate deposition along the tracks requested
unnormalised data will be printed event by event
The header of the formatted output is practically identical to that of USRBIN, except for the words
“event by event” printed after the total number of particles:
***** Title (as provided by input command TITLE) *****
1
Cartesian binning n. 1 "Eventscore" , generalised particle n. 208
X coordinate: from -1.5000E+02 to 1.5000E+02 cm, 75 bins ( 4.0000E+00 cm wide)
Y coordinate: from 1.0000E+02 to 2.0000E+02 cm, 50 bins ( 2.0000E+00 cm wide)
Z coordinate: from -2.0000E+01 to 1.8000E+02 cm, 20 bins ( 1.0000E+01 cm wide)
Data follow in a matrix A(ix,iy,iz), format (1(5x,1p,10(1x,e11.4)))
The binning matrix is then printed once for each event (8000 times in the above example), every time
preceded by a line:
Binning n: 1, "Eventscore", Event #: 1, Primary(s) weight 1.0000E+00
................................................................................
Binning n: 1, "Eventscore", Event #: 8000, Primary(s) weight 1.0000E+00
As for most other estimators, the matrix is easily read and manipulated by a simple program, using
the format reported in the header.
Option EVENTDAT (p. 126) produces either unformatted or formatted output (see EVENTDAT description
for instructions on how to read an unformatted output). Unlike other estimators, no information is printed
on standard output.
The formatted output begins with run title and run time, followed by short information about:
– Number of regions
– Number of generalised particle distributions requested
– History number
– Primary weight
– Primary energy
– Total energy balance for the current history, made of 12 contributions. Some of them correspond
to those found in the final balance printed at the end of the standard output, but in this case no
normalisation to the primary weight is made. Note that some of the contributions are meaningful only
in specific contexts (e.g., if low-energy neutron transport has been requested). No explanation is given
about the meaning of each contribution, which must be found here below in the order they are printed:
1 = energy deposited by ionisation
2 = energy deposited by π⁰, electrons, positrons and photons
3 = energy deposited by nuclear recoils and heavy fragments
4 = energy deposited by particles below threshold
5 = energy leaving the system
6 = energy carried by discarded particles
7 = residual excitation energy after evaporation
8 = energy deposited locally by low-energy neutrons (kerma)
9 = energy of particles outside the time limit
10 = energy lost in endothermic nuclear reactions above 50 MeV
11 = energy lost in endothermic low-energy neutron reactions
12 = missing energy
– Energy or stars (depending on the generalised particle scoring distribution) deposited or produced in
each region during the current history
– Random number generator information to be read in order to reproduce the current sequence (skipping
calls)
Example:
**** Event-Data ****
Energy deposition by protons in PbWO4
DATE: 1/ 5/ 5, TIME: 17:42:33
No. of regions. 3 No. of distr. scored 1
Event # 1
Primary Weight 1. Primary Energy 2. GeV
Contributions to the energy deposition
(GeV not normalised to the weight):
0.519268453 0.963951886 0.183623865 0. 0. 0.0999941751 0. 0.0797502846
0. 0. 0. 0.153411314
Generalised scoring distribution # 208
from first to last region:
0. 0.109102778 1.6374917
Seeds after event # 1
*** FADB81 0 0 0 0 0 33B49B1 0 0 0***
Event # 2
Primary Weight 1. Primary Energy 2. GeV
Contributions to the energy deposition
(GeV not normalised to the weight):
1.04533529 0.827161014 0.00902671926 0. 0. 0. 0. 0.0179061908 0. 0.
0. 0.100570783
Generalised scoring distribution # 208
from first to last region:
0. 0.00186400011 1.89756525
Seeds after event # 2
*** 1034722 0 0 0 0 0 33B49B1 0 0 0***
......................................................................................
Option RESNUCLEi (p. 218) produces either formatted or unformatted output. For the latter, see RESNUCLEi
description for instructions on how to read it.
The formatted output begins with the same heading as the standard output (run title and run time), followed
by short information about the requested scoring.
The information reported in 3, 4 and 5 is printed also in the expanded input summary on main output
(see (g), p. 307). For instance:
Res. nuclei n. 1 "Al-Region " , "high" energy products, region n. 3
detector volume: 1.0000E+00 cm**3
Max. Z: 86, Max. N-Z: 49 Min. N-Z: -4
data will be printed on unit 21 (unformatted if < 0)
On the formatted RESNUCLEi output, the above text is followed by one additional line explaining how
to read the result matrix which follows:
Data follow in a matrix A(z,n-z-k), k: -5 format (1(5x,1p,10(1x,e11.4)))
Here is an example of a simple program which can be used to display the same results in a plainer form:
      PROGRAM READRN
      CHARACTER*125 LINE, FILINP, FILOUT
      PARAMETER (MAXZ = 86, MINNMZ = -4, MAXNMZ = 49, K = -5)
      DIMENSION RESULT(MAXZ, MINNMZ-K:MAXNMZ-K)
      WRITE(*,*) "Filename?"
      READ(*,'(A)') FILINP
*     open the formatted RESNUCLEi output file and the file which will
*     receive the reformatted results
      OPEN(UNIT=1, FILE=FILINP, STATUS='OLD')
      WRITE(*,*) "Output filename?"
      READ(*,'(A)') FILOUT
      OPEN(UNIT=2, FILE=FILOUT, STATUS='UNKNOWN')
      DO 1 I = 1, 14
         READ(1,'(A)') LINE       ! skip header lines
 1    CONTINUE
      READ(1,100,END=4) RESULT
 4    CONTINUE
      WRITE(2,'(A)') ' Z A Residual nuclei'
      WRITE(2,'(A,/)') ' per cm**3 per primary'
      DO 2 I = 1, MAXZ
         DO 3 J = MINNMZ-K, MAXNMZ-K
            IF(RESULT(I,J) .GT. 0.D0)
     &         WRITE(2,'(2I4,1P, G15.6)') I, J+K+2*I, RESULT(I,J)
 3       CONTINUE
 2    CONTINUE
 100  FORMAT(1(5X,1P,10(1X,E11.4)))
      END
Option USRBDX (p. 246) produces either formatted or unformatted output (for the latter, see USRBDX
description for instructions on how to read it). As for most other estimators, a complete description in clear
of the requested scoring is printed also on the standard output. For instance:
After the title and date, and one line reporting the total number of particles and their weight, the
header of the formatted output is very similar to the above text:
As for most other estimators, the matrix is easily read and manipulated by a simple program, using
the format reported in the header. It can also be cut and pasted into a spreadsheet.
Option USRBIN (p. 249) produces either formatted or unformatted output (for the latter, see USRBIN de-
scription for instructions on how to read it). As for most other estimators, a complete description in clear
of the requested scoring is printed also on the standard output. For instance:
Cartesian binning n. 1 "Cufront " , generalised particle n. 208
X coordinate: from -2.1100E-01 to 5.5910E+00 cm, 58 bins ( 1.0003E-01 cm wide)
Y coordinate: from 0.0000E+00 to 5.4010E+00 cm, 53 bins ( 1.0191E-01 cm wide)
Z coordinate: from 0.0000E+00 to -1.0000E-03 cm, 1 bins (-1.0000E-03 cm wide)
data will be printed on unit 21 (unformatted if < 0)
+/- Y symmetry requested and implemented
accurate deposition along the tracks requested
normalised (per unit volume) data will be printed at the end of the run
After the title and date, and one line reporting the total number of particles and their weight, the
header of the formatted output is very similar to the above text:
***** Roman Pot: box with windows *****
1
Cartesian binning n. 1 "Cufront " , generalised particle n. 208
X coordinate: from -2.1100E-01 to 5.5910E+00 cm, 58 bins ( 1.0003E-01 cm wide)
Y coordinate: from 0.0000E+00 to 5.4010E+00 cm, 53 bins ( 1.0191E-01 cm wide)
Z coordinate: from 0.0000E+00 to -1.0000E-03 cm, 1 bins (-1.0000E-03 cm wide)
Data follow in a matrix A(ix,iy,iz), format (1(5x,1p,10(1x,e11.4)))
As for most other estimators, the matrix is easily read and manipulated by a simple program, using
the format reported in the header. It can also be cut and pasted into a spreadsheet.
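As an illustration, a minimal program in the style of the RESNUCLEi example above could be used (the
file name and the number of header lines preceding the matrix are assumptions to be adapted to the
actual output):
      PROGRAM READBN
*     Illustrative sketch: read the formatted USRBIN matrix of the
*     example above (58 x 53 x 1 bins) and print its maximum content.
      PARAMETER ( NX = 58, NY = 53, NZ = 1 )
      DIMENSION BIN(NX,NY,NZ)
      CHARACTER*132 LINE
      BMAX = 0.0
      MX = 1
      MY = 1
      MZ = 1
*     file name and number of header lines are assumptions
      OPEN(UNIT=21, FILE='romanpot_usrbin_21.lis', STATUS='OLD')
      DO 1 I = 1, 8
         READ(21,'(A)') LINE     ! skip header lines
 1    CONTINUE
      READ(21,100) BIN
      DO 2 IZ = 1, NZ
         DO 3 IY = 1, NY
            DO 4 IX = 1, NX
               IF ( BIN(IX,IY,IZ) .GT. BMAX ) THEN
                  BMAX = BIN(IX,IY,IZ)
                  MX = IX
                  MY = IY
                  MZ = IZ
               END IF
 4          CONTINUE
 3       CONTINUE
 2    CONTINUE
      WRITE(*,*) ' Maximum value ', BMAX, ' in bin ', MX, MY, MZ
 100  FORMAT(1(5X,1P,10(1X,E11.4)))
      END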
Option USRCOLL (p. 257) produces either formatted or unformatted output (for the latter, see USRTRACK
description for instructions on how to read it - the two options produce output with identical format).
As for most other estimators, a complete description in clear of the requested scoring is printed also
on the standard output. For instance:
Coll n. 1 "collogchf " , generalised particle n. 202, region n. 6
detector volume: 4.0000E+01 cm**3
Warning! Collision estimators not implemented for electrons/positrons and photons
logar. energy binning from 1.0000E-11 to 1.0000E+00 GeV, 1000 bins (ratio : 1.0257E+00)
data will be printed on unit 24 (unformatted if < 0)
After the title and date, and one line reporting the total number of particles and their weight, the
header of the formatted output is very similar to the above text:
***** Test collision estimator *****
1
As for most other estimators, the matrix is easily read and manipulated by a simple program, using
the format reported in the header. It can also be cut and pasted into a spreadsheet.
Option USRTRACK (p. 262) produces either formatted or unformatted output (for the latter, see USRTRACK
description for instructions on how to read it).
As for most other estimators, a complete description in clear of the requested scoring is printed also
on the standard output. For instance:
Track n. 1 "tklogchb " , generalised particle n. 202, region n. 6
detector volume: 4.0000E+01 cm**3
logar. energy binning from 1.0000E-11 to 1.0000E+00 GeV, 1000 bins (ratio : 1.0257E+00)
data will be printed on unit -23 (unformatted if < 0)
After the title and date, and one line reporting the total number of particles and their weight, the
header of the formatted output is very similar to the above text:
***** Test track-length/coll. reading program for the manual *****
As for most other estimators, the matrix is easily read and manipulated by a simple program, using
the format reported in the header. It can also be cut and pasted into a spreadsheet.
Option USRYIELD (p. 265) produces either formatted or unformatted output (for the latter, see USRYIELD
description for instructions on how to read it).
As for most other estimators, a complete description in clear of the requested scoring is printed also
on the standard output. For instance:
Yield n. 1 "TotPi+(E) " , generalised particle n. 13, from region n. 3 to region n. 2
user normalisation: 1.0000E+00, adopted cross section (if any): 1.0000E+00 mb
logar. 1st variable binning from 1.0000E-03 to 5.0000E+01 100 bins (ratio : 1.1143E+00)
2nd variable ranges from 0.0000E+00 to 3.1416E+00
1st variable is: Laboratory Kinetic Energy
2nd variable is: Laboratory Angle (radians)
data will be printed on unit 21 (unformatted if < 0)
After the title and date, and one line reporting the total number of particles and their weight, the
header of the formatted output is very similar to the above text:
As for most other estimators, the matrix is easily read and manipulated by a simple program, using
the format reported in the header. It can also be cut and pasted into a spreadsheet.
Transport of neutrons with energies lower than a certain threshold is performed in Fluka by a multigroup
algorithm. The energy boundary below which multigroup transport takes over depends in principle on the
cross section library used. This energy is 20 MeV for the 260-group library which is distributed with the
code.1
The multi-group technique, widely used in low-energy neutron transport programs, consists in dividing
the energy range of interest into a given number of intervals (“energy groups”). Elastic and inelastic reactions
are simulated not as exclusive processes, but by group-to-group transfer probabilities forming the so-called
downscattering matrix.
The scattering transfer probability between different groups is represented by a Legendre polynomial
expansion truncated at the (N+1)th term, as shown in the equation:
\sigma_s(g \rightarrow g', \mu) \;=\; \sum_{i=0}^{N} \frac{2i+1}{4\pi}\, P_i(\mu)\, \sigma_s^i(g \rightarrow g')
where µ = Ω · Ω′ is the cosine of the scattering angle and N is the chosen Legendre order of anisotropy.
The particular implementation used in Fluka has been derived from that of the Morse program [60]
(although the relevant part of the code has been completely rewritten). In the Fluka neutron cross section
library, the energy range up to 20 MeV is divided into 260 energy groups of approximately equal logarithmic
width (31 of which are thermal). The angular probabilities for inelastic scattering are obtained by a dis-
cretisation of a P5 Legendre polynomial expansion of the actual scattering distribution which preserves its
first 6 moments. The generalised Gaussian quadrature scheme to generate the discrete distribution is rather
complicated: details can be found in the Morse manual [60]. The result, in the case of a P5 expansion, is
a set of 6 equations giving 3 discrete polar angles (actually angle cosines) and 3 corresponding cumulative
probabilities.
The multigroup scheme adopted in Fluka is reliable and much faster than any possible approach using
continuous cross sections. However, it is important to remember that there are two rare situations where
the group approximation could give bad results.
1 In Fluka, there are two neutron energy thresholds: one for high-energy neutrons (set by option PART–THR) and one for
low-energy neutrons (set by option LOW–BIAS). The high-energy neutron threshold represents in fact the energy boundary
between continuous and discontinuous neutron transport.
One of such situations may occur when each neutron is likely to scatter only once (e.g., in a very thin
foil) before being scored: an artefact then is possible, due to the discrete angular distribution. In practice
the problem vanishes entirely, however, as soon as there is the possibility of two or more scatterings: it must
be kept in mind, in fact, that after a collision only the polar angle is sampled from a discrete distribution,
while the azimuthal angle is chosen randomly from a uniform distribution. In addition, the 3 discrete angles
are different for each g → g 0 combination and for each element or isotope. Thus, any memory of the initial
direction is very quickly lost after just a few collisions.
The second possible artefact is not connected with the angular but with the energy structure of the
cross sections used. The group structure is necessarily coarse with respect to the resonance structure in
many materials. A resonance in a material present in a dilute mixture or as a small piece cannot affect much
a smooth neutron flux (case of so-called “infinite dilution”) but if an isotope is very pure and is present in
large amounts, it can act as a “neutron sink”, causing sharp dips in the neutron spectrum corresponding to
each resonance. This effect, which results in a lower reaction rate σφ, is called self-shielding and is necessarily
lost in the process of cross section averaging over the width of each energy group, unless a special correction
is made. Such corrected cross section sets with different degrees of self-shielding have been included in the
Fluka libraries for a few important elements (Al, Fe, Cu, Au, Pb, Bi): but it is the responsibility of the user
to select the set with the degree of self-shielding most suitable in each different case. It is worth stressing
that non-self-shielded materials are perfectly adequate in most practical cases, because the presence of even
small amounts of impurities is generally sufficient to smooth out the effect. On the other hand, in regions of
non-resolved resonances the multigroup approach is known to give very good results anyway.
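As an illustration of how a self-shielded set is selected in practice (the card syntax is that of option LOW–MAT, p. 158; it is assumed here, as in the optical photon example inputs of Chapter 12, that liquid argon has been defined by the user as Fluka material number 18), the self-shielded natural argon cross sections at 87 K listed in Table 10.3 would be requested with a card like:

* Select the self-shielded natural argon set (identifiers 18, -4, 87)
LOW-MAT 18.0 18.0 -4.0 87.0 ARGON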
In general, gamma generation by low-energy neutrons (but not gamma transport) is treated in the frame
of a multigroup scheme too. A downscattering matrix provides the probability, for a neutron in a given
energy group, to generate a photon in each of a number of gamma energy groups (42 in the Fluka library),
covering the range from 1 keV to 50 MeV. With the exception of a few important gamma lines, such as the
2.2 MeV transition of Deuterium and the 478 keV photon from the ¹⁰B(n,α) reaction, the actual energy of the
generated photon is sampled randomly in the energy interval corresponding to its gamma group. Note that
the gamma generation matrix does not include only capture gammas, but also gammas produced in other
inelastic reactions such as (n,n0 ).
For a few elements (Cd, Xe, Ar), for which evaluated gamma production cross sections could not be
found, a different algorithm, based on published energy level data, has been provided to generate explicitly
the full cascade of monoenergetic gammas [77].
In all cases, the generated gammas are transported in the same way as all other photons in Fluka,
using continuous cross sections and an explicit and detailed description of all their interactions with matter,
allowing for the generation of electrons, positrons, and even secondary particles from photonuclear reactions.
In the multigroup transport scheme, the production of secondary neutrons via (n,xn) reactions is taken into
account implicitly by the so-called non-absorption probability, a group-dependent factor by which the weight
of a neutron is multiplied after exiting a collision. If the only possible reactions are capture and scattering,
the non-absorption probability is < 1, but at energies above the threshold for (n,2n) reaction it can take
values larger than 1.
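Schematically (this is only meant to illustrate the meaning of the factor, not the exact expression used in preparing the library), the non-absorption probability of group g is the mean number of neutrons leaving a collision:
\[
   p_{\mathrm{na}}(g) \;\simeq\; \frac{\sigma_{\mathrm{scatt}}(g) + 2\,\sigma_{(n,2n)}(g) + 3\,\sigma_{(n,3n)}(g) + \dots}{\sigma_{\mathrm{tot}}(g)}
\]
which is smaller than 1 when only capture and scattering compete, and larger than 1 when the (n,xn) channels are open.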
Fission neutrons, however, are treated separately and created explicitly using a group-dependent
fission probability. They are assumed to be emitted isotropically and their energy is sampled from the
fission spectrum appropriate for the relevant isotope and neutron energy. The fission neutron multiplicity is
obtained separately from data extracted from European, American and Japanese databases.
Recoil protons and protons from N(n,p) reaction are produced and transported explicitly, taking into ac-
count the detailed kinematics of elastic scattering, continuous energy loss with energy straggling, delta ray
production, multiple and single scattering.
The same applies to light fragments (α, ³H) from neutron capture in ⁶Li and ¹⁰B, if pointwise transport
has been requested by the user. All other charged secondaries, including fission fragments (see 10.3.4), are
not transported but their energy is deposited at the point of interaction (kerma approximation).
For many materials, but not for all, group-dependent information on the residual nuclei produced by low-
energy neutron interactions is available in the Fluka libraries. This information can be used to score residual
nuclei, but it is important that the user check its availability before requesting scoring.
Fission fragments are sampled separately, using evaluated data extracted from European, American
and Japanese databases.
As explained in Chap. 3, an unformatted cross section data set, or library, is needed for low-energy neutron
transport. For a description of the algorithms used for tracking low-energy neutrons, see 10.1. Other useful
information can be found in the Notes to options LOW–NEUT (p. 160), LOW–MAT (p. 158) and LOW–BIAS
(p. 154).
The Legendre expansion used in Fluka is P5, i.e. at each collision the polar scattering angle is
sampled from three discrete values, such that the first 6 moments of the angular distribution are preserved
(the azimuthal angle is sampled instead from a uniform distribution between 0 and 2π). The energy group
structure depends on the cross section set used. Here below the group structure of the currently available
sets is reported, and a list of the materials they contain.
The default Fluka neutron cross section library (originally prepared by G. Panini of ENEA [57])
contains more than 250 different materials (natural elements or single nuclides), selected for their interest
in physics, dosimetry and accelerator engineering. This library has a larger number of groups and a better
resolution in the thermal energy range with respect to the original one.
The preparation of the library involves the use of a specialised code [141] and several ad-hoc programs
written to adjust the output to the particular structure of these libraries. The library is continuously enriched
and updated on the basis of the most recent evaluations (ENDF/B, JEF, JENDL etc.). The library format
is similar to that known as Anisn [62] (or FIDO) format, but it has been modified to include kerma factor
data, residual nuclei and partial exclusive cross sections when available. The latter are not used directly by
Fluka, but can be folded over calculated spectra to get reaction rates and induced activities.
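The folding is the usual group-wise sum, recalled here only for clarity:
\[
   R_x \;=\; \sum_{g} \sigma_{x,g}\, \Phi_g
\]
where σx,g is the partial cross section of reaction x averaged over group g and Φg is the group fluence obtained from the calculated neutron spectrum.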
More materials can be made available on request, if good evaluations are available. Some cross sections
are available in the library at two or three different temperatures mainly in view of simulations of calorimeters
containing cryogenic scintillators. Doppler broadening is taken into account.
Note that the energy groups are numbered in order of decreasing energy (group 1
corresponds to the highest energy).
The default Fluka neutron cross section library has 260 neutron groups and 42 gamma groups.
Gamma energy groups are used only for (n,γ) production, since transport of photons in Fluka is
continuous in energy and angle and is performed through the Emf module.
Hydrogen cross sections, which have a particular importance in neutron slowing-down, are available
also for different types of molecular binding (free, H2 O, CH2 ).
At present, the Fluka libraries contain only single isotopes or elements of natural isotopic composi-
tion, although the possibility exists to include in future also pre-mixed materials.
Neutron energy deposition in most materials is calculated by means of kerma factors (including con-
tributions from low-energy fission). However, recoil protons and protons from N(n,p) reaction are produced
and transported explicitly (see 10.3.3 above).
Each material is identified by an alphanumeric name (a string not longer than 8 characters, all in upper
case), and by three integer identifiers. Correspondence with Fluka materials (standard or user-defined) is
established by matching the name and whichever of the identifiers the user has specified (zero or more of them).
In case of ambiguity, the first material in the list fulfilling the requested combination is selected (a sketch of
this selection rule is given after the list below). (See command LOW–MAT, p. 158, for more details).
The convention generally used (but there may be exceptions) for the three identifiers is:
1. Atomic number
2. Mass number, or natural isotopic composition if negative (exceptions are possible in order to distinguish
between data from different sources referring to the same nuclide)
3. Neutron temperature in degrees Kelvin
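The selection logic described above can be sketched as follows (purely illustrative pseudo-code, not the routine actually used by Fluka; names, array layout and the use of 0 to mean "identifier not specified" are assumptions of the example):

      INTEGER FUNCTION LOWFND ( NLIB, LNAME, LID1, LID2, LID3,
     &                          RNAME, RID1, RID2, RID3 )
*  Illustrative only: scan the library list in order and return the
*  index of the first entry whose name matches RNAME and whose
*  identifiers agree with every identifier actually requested
*  (here a request of 0 stands for "identifier not given").
      IMPLICIT NONE
      INTEGER NLIB, I
      CHARACTER*8 LNAME (NLIB), RNAME
      INTEGER LID1 (NLIB), LID2 (NLIB), LID3 (NLIB)
      INTEGER RID1, RID2, RID3
      DO 10 I = 1, NLIB
         IF ( LNAME (I) .NE. RNAME ) GO TO 10
         IF ( RID1 .NE. 0 .AND. LID1 (I) .NE. RID1 ) GO TO 10
         IF ( RID2 .NE. 0 .AND. LID2 (I) .NE. RID2 ) GO TO 10
         IF ( RID3 .NE. 0 .AND. LID3 (I) .NE. RID3 ) GO TO 10
*  First entry fulfilling the requested combination: take it
         LOWFND = I
         RETURN
 10   CONTINUE
*  No entry matches the request
      LOWFND = 0
      RETURN
      END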
The neutron group structure of the 260-group data set is reported in Table 10.1. The corresponding gamma
42-group structure is reported in Table 10.2.
Table 10.1: Neutron group structure, giving for each of the 260 groups the upper energy limit in GeV (table body omitted).
A list of the materials for which cross sections are available in the new 260-group library is reported in
Table 10.3. The different columns of the Table contain, in the order:
[1]: the symbol of the nuclide (if the atomic mass number is not present, the cross sections refer to the
natural element composition)
[2]: a short description of the material
[3]: the temperature in degrees Kelvin at which the cross sections have been processed
[4]: the evaluated data file (origin) from which the data are derived
[5]: the availability of information on production of residual nuclei
[6]: the name with which Fluka refers to that material
[7]-[9]: the three integer identifiers described above
[10]: the availability of gamma generation data (column n,γ)
Table 10.3: Materials for which cross sections are available in the new
Fluka neutron library
[1] Material   [2] Description   [3] K   [4] Origin   [5] Residual nuclei   [6] Name   [7]-[9] Identifiers   [10] n,γ
H H2 O bound nat. Hydrogen 296 ENDF/B–VIIR0 Yes HYDROGEN 1 -2 296 Yes
H CH2 bound nat. Hydrogen 296 ENDF/B–VIIR0 Yes HYDROGEN 1 -3 296 Yes
H Free gas natural Hydrogen 296 ENDF/B–VIIR0 Yes HYDROGEN 1 -5 296 Yes
H Free gas natural Hydrogen 87 ENDF/B–VIIR0 Yes HYDROGEN 1 -2 87 Yes
H Free gas natural Hydrogen 4 ENDF/B–VIIR0 Yes HYDROGEN 1 -5 4 Yes
H Free gas natural Hydrogen 430 ENDF/B–VIIR0 Yes HYDROGEN 1 -5 430 Yes
1
H H2 O bound Hydrogen 1 296 ENDF/B–VIIR0 Yes HYDROG-1 1 +1 296 Yes
1
H CH2 bound Hydrogen 1 296 ENDF/B–VIIR0 Yes HYDROG-1 1 +11 296 Yes
1
H Free gas Hydrogen 1 296 ENDF/B–VIIR0 Yes HYDROG-1 1 +31 296 Yes
1
H Free gas Hydrogen 1 87 ENDF/B–VIIR0 Yes HYDROG-1 1 +1 87 Yes
2
H D2O bound Deuterium 296 ENDF/B–VIIR0 Yes DEUTERIU 1 +2 296 Yes
2
H Free gas Deuterium 296 ENDF/B–VIIR0 Yes DEUTERIU 1 +32 296 Yes
2
H Free gas Deuterium 87 ENDF/B–VIIR0 Yes DEUTERIU 1 +2 87 Yes
3
H Free gas Tritium 296 ENDF/B–VIIR0 Yes TRITIUM 1 +4 296 Yes
3
H Free gas Tritium 87 ENDF/B–VIIR0 Yes TRITIUM 1 +4 87 Yes
He Natural Helium 296 ENDF/B–VIIR0 Yes HELIUM 2 -2 296 Yes
He Natural Helium 87 ENDF/B–VIIR0 Yes HELIUM 2 -2 87 Yes
He Natural Helium 4 ENDF/B–VIIR0 Yes HELIUM 2 -2 4 Yes
3
He Helium 3 296 ENDF/B–VIIR0 Yes HELIUM-3 2 +3 296 Yes
3
He Helium 3 87 ENDF/B–VIIR0 Yes HELIUM-3 2 +3 87 Yes
3
He Helium 3 4 ENDF/B–VIIR0 Yes HELIUM-3 2 +3 4 Yes
4
He Helium 4 296 ENDF/B–VIIR0 Yes HELIUM-4 2 +4 296 Yes
4
He Helium 4 87 ENDF/B–VIIR0 Yes HELIUM-4 2 +4 87 Yes
4
He Helium 4 4 ENDF/B–VIIR0 Yes HELIUM-4 2 +4 4 Yes
Li Natural Lithium 296 JENDL–3.3 Yes LITHIUM 3 -2 296 Yes
Li Natural Lithium 87 JENDL–3.3 Yes LITHIUM 3 -2 87 Yes
6
Li Lithium 6 296 JENDL–3.3 Yes LITHIU-6 3 +6 296 Yes
6
Li Lithium 6 87 JENDL–3.3 Yes LITHIU-6 3 +6 87 Yes
7
Li Lithium 7 296 JENDL–3.3 Yes LITHIU-7 3 +7 296 Yes
7
Li Lithium 7 87 JENDL–3.3 Yes LITHIU-7 3 +7 87 Yes
9
Be Beryllium 9 296 ENDF/B–VIIR0 Yes BERYLLIU 4 +9 296 Yes
9
Be Beryllium 9 87 ENDF/B–VIIR0 Yes BERYLLIU 4 +9 87 Yes
B Natural Boron 296 ENDF/B–VIIR0 Yes BORON 5 -2 296 Yes
B Natural Boron 87 ENDF/B–VIIR0 Yes BORON 5 -2 87 Yes
10
B Boron 10 296 ENDF/B–VIIR0 Yes BORON-10 5 +10 296 Yes
10
B Boron 10 87 ENDF/B–VIIR0 Yes BORON-10 5 +10 87 Yes
11
B Boron 11 296 ENDF/B–VIIR0 Yes BORON-11 5 +11 296 Yes
11
B Boron 11 87 ENDF/B–VIIR0 Yes BORON-11 5 +11 87 Yes
C Free gas natural Carbon 296 ENDF/B–VIIR0 Yes CARBON 6 -2 296 Yes
C Free gas natural Carbon 87 ENDF/B–VIIR0 Yes CARBON 6 -2 87 Yes
C Free gas natural Carbon 4 ENDF/B–VIIR0 Yes CARBON 6 -2 4 Yes
C Free gas natural Carbon 430 ENDF/B–VIIR0 Yes CARBON 6 -2 430 Yes
C Graphite bound natural Carbon 296 ENDF/B–VIIR0 Yes CARBON 6 -3 296 Yes
N Natural Nitrogen 296 ENDF/B–VIIR0 Yes NITROGEN 7 -2 296 Yes
N Natural Nitrogen 87 ENDF/B–VIIR0 Yes NITROGEN 7 -2 87 Yes
16
O Oxygen 16 296 ENDF/B–VIR8 Yes OXYGEN 8 +16 296 Yes
16
O Oxygen 16 87 ENDF/B–VIR8 Yes OXYGEN 8 +16 87 Yes
16
O Oxygen 16 4 ENDF/B–VIR8 Yes OXYGEN 8 +16 4 Yes
16
O Oxygen 16 430 ENDF/B–VIR8 Yes OXYGEN 8 +16 430 Yes
19
F Fluorine 19 296 ENDF/B–VIR8 Yes FLUORINE 9 +19 296 Yes
19
F Fluorine 19 87 ENDF/B–VIR8 Yes FLUORINE 9 +19 87 Yes
Ne Natural Neon 296 TENDL–10 Yes NEON 10 -2 296 Yes
23
Na Sodium 23 296 JENDL–3.3 Yes SODIUM 11 +23 296 Yes
23
Na Sodium 23 87 JENDL–3.3 Yes SODIUM 11 +23 87 Yes
Mg Natural Magnesium 296 JENDL–3.3 Yes MAGNESIU 12 -2 296 Yes
Mg Natural Magnesium 87 JENDL–3.3 Yes MAGNESIU 12 -2 87 Yes
27
Al Aluminium 27 296 ENDF/B–VIIR0 Yes ALUMINUM 13 +27 296 Yes
27
Al Aluminium 27 87 ENDF/B–VIIR0 Yes ALUMINUM 13 +27 87 Yes
27
Al Aluminium 27 4 ENDF/B–VIIR0 Yes ALUMINUM 13 +27 4 Yes
27
Al Aluminium 27 430 ENDF/B–VIIR0 Yes ALUMINUM 13 +27 430 Yes
27
Al Aluminium 27 SelfShielded 296 ENDF/B–VIIR0 Yes ALUMINUM 13 1027 296 Yes
27
Al Aluminium 27 SelfShielded 87 ENDF/B–VIIR0 Yes ALUMINUM 13 1027 87 Yes
27
Al Aluminium 27 SelfShielded 4 ENDF/B–VIIR0 Yes ALUMINUM 13 1027 4 Yes
27
Al Aluminium 27 SelfShielded 430 ENDF/B–VIIR0 Yes ALUMINUM 13 1027 430 Yes
Si Natural Silicon 296 ENDF/B–VIR8 Yes SILICON 14 -2 296 Yes
Si Natural Silicon 87 ENDF/B–VIR8 Yes SILICON 14 -2 87 Yes
31
P Phosphorus 31 296 JENDL–3.3 Yes PHOSPHO 15 31 296 Yes
31
P Phosphorus 31 87 JENDL–3.3 Yes PHOSPHO 15 31 87 Yes
S Natural Sulphur (1) 296 JENDL–3.3 Yes SULFUR 16 -2 296 Yes
S Natural Sulphur (1) 87 JENDL–3.3 Yes SULFUR 16 -2 87 Yes
Cl Natural Chlorine 296 ENDF/B–VIIR0 Yes CHLORINE 17 -2 296 Yes
Cl Natural Chlorine 87 ENDF/B–VIIR0 Yes CHLORINE 17 -2 87 Yes
Ar Natural Argon 296 JEFF–3.1 Yes ARGON 18 -2 296 Yes
Ar Natural Argon 87 JEFF–3.1 Yes ARGON 18 -2 87 Yes
Ar Natural Argon SelfShielded 296 JEFF–3.1 Yes ARGON 18 -4 296 Yes
Ar Natural Argon SelfShielded 87 JEFF–3.1 Yes ARGON 18 -4 87 Yes
40
Ar Argon 40 296 JEFF–3.1 Yes ARGON-40 18 40 296 Yes
40
Ar Argon 40 87 JEFF–3.1 Yes ARGON-40 18 40 87 Yes
40
Ar Argon 40 SelfShielded 296 JEFF–3.1 Yes ARGON-40 18 1040 296 Yes
40
Ar Argon 40 SelfShielded 87 JEFF–3.1 Yes ARGON-40 18 1040 87 Yes
K Natural Potassium 296 ENDF/B–VIIR0 Yes POTASSIU 19 -2 296 Yes
K Natural Potassium 87 ENDF/B–VIIR0 Yes POTASSIU 19 -2 87 Yes
Ca Natural Calcium (2) 296 ENDF/B–VIIR0 Yes CALCIUM 20 -2 296 Yes
Ca Natural Calcium (2) 87 ENDF/B–VIIR0 Yes CALCIUM 20 -2 87 Yes
45
Sc Scandium 45 (2) 296 ENDF/B-VIIR0 Yes SCANDIUM 21 45 296 Yes
45
Sc Scandium 45 (2) 87 ENDF/B-VIIR0 Yes SCANDIUM 21 45 87 Yes
Ti Natural Titanium (3) 296 ENDF/B–VIR8 Yes TITANIUM 22 -2 296 Yes
Ti Natural Titanium (3) 87 ENDF/B–VIR8 Yes TITANIUM 22 -2 87 Yes
V Natural Vanadium 296 JENDL–3.3 Yes VANADIUM 23 -2 296 Yes
V Natural Vanadium 87 JENDL–3.3 Yes VANADIUM 23 -2 87 Yes
Cr Natural Chromium 296 ENDF/B–VIR8 Yes CHROMIUM 24 -2 296 Yes
Cr Natural Chromium 87 ENDF/B–VIR8 Yes CHROMIUM 24 -2 87 Yes
Cr Natural Chromium 4 ENDF/B–VIR8 Yes CHROMIUM 24 -2 4 Yes
Cr Natural Chromium 430 ENDF/B–VIR8 Yes CHROMIUM 24 -2 430 Yes
55
Mn Manganese 55 296 ENDF/B–VIR8 Yes MANGANES 25 55 296 Yes
55
Mn Manganese 55 87 ENDF/B–VIR8 Yes MANGANES 25 55 87 Yes
55
Mn Manganese 55 4 ENDF/B–VIR8 Yes MANGANES 25 55 4 Yes
55
Mn Manganese 55 430 ENDF/B–VIR8 Yes MANGANES 25 55 430 Yes
Fe Natural Iron 296 ENDF/B–VIR8 Yes IRON 26 -2 296 Yes
Fe Natural Iron 87 ENDF/B–VIR8 Yes IRON 26 -2 87 Yes
Fe Natural Iron 4 ENDF/B–VIR8 Yes IRON 26 -2 4 Yes
Fe Natural Iron 430 ENDF/B–VIR8 Yes IRON 26 -2 430 Yes
Fe Natural Iron SelfShielded 296 ENDF/B–VIR8 Yes IRON 26 -4 296 Yes
Fe Natural Iron SelfShielded 87 ENDF/B–VIR8 Yes IRON 26 -4 87 Yes
Fe Natural Iron SelfShielded 4 ENDF/B–VIR8 Yes IRON 26 -4 4 Yes
Fe Natural Iron SelfShielded 430 ENDF/B–VIR8 Yes IRON 26 -4 430 Yes
Fe Shielding Fe (5% C) SelfShielded 296 ENDF/B–VIR8 Yes IRON 26 -8 296 Yes
Fe Shielding Fe (5% C) SelfShielded 87 ENDF/B–VIR8 Yes IRON 26 -8 87 Yes
Fe Shielding Fe (5% C) SelfShielded 4 ENDF/B–VIR8 Yes IRON 26 -8 4 Yes
Fe Shielding Fe (5% C) SelfShielded 430 ENDF/B–VIR8 Yes IRON 26 -8 430 Yes
Co Natural Cobalt 296 ENDF/B–VIIR0 Yes COBALT 27 59 296 Yes
Co Natural Cobalt 87 ENDF/B–VIIR0 Yes COBALT 27 59 87 Yes
Co Natural Cobalt 4 ENDF/B–VIIR0 Yes COBALT 27 59 4 Yes
Co Natural Cobalt 430 ENDF/B–VIIR0 Yes COBALT 27 59 430 Yes
Ni Natural Nickel 296 ENDF/B–VIR8 Yes NICKEL 28 -2 296 Yes
Ni Natural Nickel 87 ENDF/B–VIR8 Yes NICKEL 28 -2 87 Yes
Ni Natural Nickel 4 ENDF/B–VIR8 Yes NICKEL 28 -2 4 Yes
Ni Natural Nickel 430 ENDF/B–VIR8 Yes NICKEL 28 -2 430 Yes
Cu Natural Copper 296 ENDF/B–VIR8 Yes COPPER 29 -2 296 Yes
Cu Natural Copper 87 ENDF/B–VIR8 Yes COPPER 29 -2 87 Yes
Cu Natural Copper 4 ENDF/B–VIR8 Yes COPPER 29 -2 4 Yes
Cu Natural Copper 430 ENDF/B–VIR8 Yes COPPER 29 -2 430 Yes
Cu Natural Copper SelfShielded 296 ENDF/B–VIR8 Yes COPPER 29 -4 296 Yes
Cu Natural Copper SelfShielded 87 ENDF/B–VIR8 Yes COPPER 29 -4 87 Yes
Cu Natural Copper SelfShielded 4 ENDF/B–VIR8 Yes COPPER 29 -4 4 Yes
Cu Natural Copper SelfShielded 430 ENDF/B–VIR8 Yes COPPER 29 -4 430 Yes
Zn Natural Zinc 296 JENDL–4.0 Yes ZINC 30 -2 296 Yes
Zn Natural Zinc 87 JENDL–4.0 No ZINC 30 -2 87 Yes
Zn Natural Zinc 4 JENDL–4.0 Yes ZINC 30 -2 4 Yes
Zn Natural Zinc 430 JENDL–4.0 Yes ZINC 30 -2 430 Yes
Ga Natural Gallium (2),(4) 296 JEFF–3.1 Yes GALLIUM 31 -2 296 Yes
Ga Natural Gallium (2),(4) 87 JEFF–3.1 Yes GALLIUM 31 -2 87 Yes
Ge Natural Germanium 296 ENDF/B–VII Yes GERMANIU 32 -2 296 Yes
Ge Natural Germanium 87 ENDF/B–VII Yes GERMANIU 32 -2 87 Yes
75
As Arsenic 75 296 ENDF/B–VIIR0 Yes ARSENIC 33 75 296 Yes
75
As Arsenic 75 87 ENDF/B–VIIR0 Yes ARSENIC 33 75 87 Yes
Br Natural Bromine (2) 296 ENDF/B–VIIR0 Yes BROMINE 35 -2 296 No
Br Natural Bromine (2) 87 ENDF/B–VIIR0 Yes BROMINE 35 -2 87 No
Kr Natural Krypton 296 ENDF/B–VIIR0 Yes KRYPTON 36 -2 296 No
Kr Natural Krypton 120 ENDF/B–VIIR0 Yes KRYPTON 36 -2 120 No
Sr Natural Strontium 296 ENDF/B–VIIR0 Yes STRONTIU 38 -2 296 No
Sr Natural Strontium 87 ENDF/B–VIIR0 Yes STRONTIU 38 -2 87 No
90
Sr Strontium 90 296 ENDF/B–VIIR0 Yes 90-SR 38 90 296 No
90
Sr Strontium 90 87 ENDF/B–VIIR0 Yes 90-SR 38 90 87 No
89
Y Yttrium 89 296 ENDF/B–VIR8 Yes YTTRIUM 38 90 296 No
89
Y Yttrium 89 87 ENDF/B–VIR8 Yes YTTRIUM 38 90 87 No
Zr Natural Zirconium (2) 296 ENDF/B–VIIR0 Yes ZIRCONIU 40 -2 296 Yes
Zr Natural Zirconium (2) 87 ENDF/B–VIIR0 Yes ZIRCONIU 40 -2 87 Yes
93
Nb Niobium 93 (2) 296 ENDF/B-VIIR0 Yes NIOBIUM 41 93 296 Yes
93
Nb Niobium 93 (2) 87 ENDF/B-VIIR0 Yes NIOBIUM 41 93 87 Yes
Mo Natural Molybdenum (2) 296 EFF–2.4 Yes MOLYBDEN 42 -2 296 Yes
Mo Natural Molybdenum (2) 87 EFF–2.4 Yes MOLYBDEN 42 -2 87 Yes
99
Tc Technetium 99 296 ENDF/B–VIIR0 Yes 99-TC 43 99 296 Yes
99
Tc Technetium 99 87 ENDF/B–VIIR0 Yes 99-TC 43 99 87 Yes
Pd Natural Palladium 296 ENDF/B–VIR8 Yes PALLADIU 46 -2 296 Yes
Pd Natural Palladium 87 ENDF/B–VIR8 Yes PALLADIU 46 -2 87 Yes
Ag Natural Silver 296 ENDF/B–VIIR0 Yes SILVER 47 -2 296 Yes
Ag Natural Silver 87 ENDF/B–VIIR0 Yes SILVER 47 -2 87 Yes
Cd Natural Cadmium (2) 296 JENDL–3.3 Yes CADMIUM 48 -2 296 Yes
Cd Natural Cadmium (2) 87 JENDL–3.3 Yes CADMIUM 48 -2 87 Yes
In Natural Indium (2) 296 ENDF/B–VIIR0 Yes INDIUM 49 -2 296 No
In Natural Indium (2) 87 ENDF/B–VIIR0 Yes INDIUM 49 -2 87 No
Sn Natural Tin 296 ENDF/B–VIR8 Yes TIN 50 -2 296 No
Sn Natural Tin 87 ENDF/B–VIR8 Yes TIN 50 -2 87 No
Sb Natural Antimony 296 ENDF/B–VIIR0 Yes ANTIMONY 51 -2 296 No
Sb Natural Antimony 87 ENDF/B–VIIR0 Yes ANTIMONY 51 -2 87 No
127
I Iodine 127 296 ENDF/B–IIR0 Yes IODINE 53 127 296 Yes
127
I Iodine 127 87 ENDF/B–VIIR0 Yes IODINE 53 127 87 Yes
129
I Iodine 129 (2) 296 ENDF/B–VIIR0 Yes 129-I 53 129 296 No
129
I Iodine 129 (2) 87 ENDF/B–VIIR0 Yes 129-I 53 129 87 No
Xe Natural Xenon (2) 296 ENDF/B–VIIR0 Yes XENON 54 -2 296 No
Xe Natural Xenon (2) 87 ENDF/B–VIIR0 Yes XENON 54 -2 87 No
124
Xe Xenon 124 296 ENDF/B–VIIR0 Yes 124-XE 54 124 296 No
124
Xe Xenon 124 87 ENDF/B–VIIR0 Yes 124-XE 54 124 87 No
126
Xe Xenon 126 296 ENDF/B–VIIR0 Yes 126-XE 54 126 296 No
126
Xe Xenon 126 87 ENDF/B–VIIR0 Yes 126-XE 54 126 87 No
128
Xe Xenon 128 296 ENDF/B–VIIR0 Yes 128-XE 54 128 296 No
128
Xe Xenon 128 87 ENDF/B–VIIR0 Yes 128-XE 54 128 87 No
129
Xe Xenon 129 296 ENDF/B–VIIR0 Yes 129-XE 54 129 296 No
129
Xe Xenon 129 87 ENDF/B–VIIR0 Yes 129-XE 54 129 87 No
130
Xe Xenon 130 296 ENDF/B–VIIR0 Yes 130-XE 54 130 296 No
130
Xe Xenon 130 87 ENDF/B–VIIR0 Yes 130-XE 54 130 87 No
131
Xe Xenon 131 296 ENDF/B–VIIR0 Yes 131-XE 54 131 296 Yes
131
Xe Xenon 131 87 ENDF/B–VIIR0 Yes 131-XE 54 131 87 Yes
132
Xe Xenon 132 296 ENDF/B–VIIR0 Yes 132-XE 54 132 296 No
132
Xe Xenon 132 87 ENDF/B–VIIR0 Yes 132-XE 54 132 87 No
134
Xe Xenon 134 296 ENDF/B–VIIR0 Yes 134-XE 54 134 296 No
134
Xe Xenon 134 87 ENDF/B–VIIR0 Yes 134-XE 54 134 87 No
135
Xe Xenon 135 296 ENDF/B–VIIR0 Yes 135-XE 54 135 296 No
135
Xe Xenon 135 87 ENDF/B–VIIR0 Yes 135-XE 54 135 87 No
136
Xe Xenon 136 296 ENDF/B–VIIR0 Yes 136-XE 54 136 296 No
136
Xe Xenon 136 87 ENDF/B–VIIR0 Yes 136-XE 54 136 87 No
133
Cs Cesium 133 296 ENDF/B–VIIR0 Yes CESIUM 55 133 296 Yes
133
Cs Cesium 133 87 ENDF/B–VIIR0 Yes CESIUM 55 133 87 Yes
135
Cs Cesium 135 (2) 296 ENDF/B–VIIR0 Yes 135-CS 55 135 296 No
135
Cs Cesium 135 (2) 87 ENDF/B–VIIR0 Yes 135-CS 55 135 87 No
137
Cs Cesium 137 (2) 296 ENDF/B–VIIR0 Yes 137-CS 55 137 296 No
137
Cs Cesium 137 (2) 87 ENDF/B–VIIR0 Yes 137-CS 55 137 87 No
Ba Natural Barium (2) 296 ENDF/B–VIIR0 Yes BARIUM 56 -2 296 No
Ba Natural Barium (2) 87 ENDF/B–VIIR0 Yes BARIUM 56 -2 87 No
La Natural Lanthanum 296 ENDF/B–VIIR0 Yes LANTHANU 57 -2 296 No
La Natural Lanthanum 87 ENDF/B–VIIR0 Yes LANTHANU 57 -2 87 No
Ce Natural Cerium (2) 296 ENDF/B–VIIR0 Yes CERIUM 58 -2 296 No
Ce Natural Cerium (2) 87 ENDF/B–VIIR0 Yes CERIUM 58 -2 87 No
Nd Natural Neodymium 296 ENDF/B–VIIR0 Yes NEODYMIU 60 -2 296 Yes
Nd Natural Neodymium 87 ENDF/B–VIIR0 Yes NEODYMIU 60 -2 87 Yes
Sm Natural Samarium (2) 296 ENDF/B–VIIR0 Yes SAMARIUM 62 -2 296 Yes
Sm Natural Samarium (2) 87 ENDF/B–VIIR0 Yes SAMARIUM 62 -2 87 Yes
Eu Natural Europium 296 ENDF/B–VIR8 Yes EUROPIUM 62 -2 296 Yes
Eu Natural Europium 87 ENDF/B–VIR8 Yes EUROPIUM 62 -2 87 Yes
Gd Natural Gadolinium 296 ENDF/B–VIIR0 Yes GADOLINI 64 -2 296 Yes
Gd Natural Gadolinium 87 ENDF/B–VIIR0 Yes GADOLINI 64 -2 87 Yes
159
Tb Terbium 159 296 ENDF/B–VIIR0 Yes TERBIUM 65 159 296 No
159
Tb Terbium 159 87 ENDF/B–VIIR0 Yes TERBIUM 65 159 87 No
Lu Natural Lutetium 296 ENDF/B-VIIR0 Yes LUTETIUM 71 -2 296 Yes
Lu Natural Lutetium 87 ENDF/B-VIIR0 Yes LUTETIUM 71 -2 87 Yes
Hf Natural Hafnium 296 JENDL–3.3 Yes HAFNIUM 72 -2 296 Yes
Hf Natural Hafnium 87 JENDL–3.3 Yes HAFNIUM 72 -2 87 Yes
181
Ta Tantalum 181 (2) 296 JENDL–3.3 Yes TANTALUM 73 181 296 Yes
181
Ta Tantalum 181 (2) 87 JENDL–3.3 Yes TANTALUM 73 181 87 Yes
181
Ta Tantalum 181 SelfShielded (2) 296 JENDL–3.3 Yes TANTALUM 73 1181 296 Yes
181
Ta Tantalum 181 SelfShielded (2) 87 JENDL–3.3 Yes TANTALUM 73 1181 87 Yes
W Natural Tungsten (2) 296 ENDF/B–VIIR0 Yes TUNGSTEN 74 -2 296 Yes
W Natural Tungsten (2) 87 ENDF/B–VIIR0 Yes TUNGSTEN 74 -2 87 Yes
W Natural Tungsten (2) 4 ENDF/B–VIIR0 Yes TUNGSTEN 74 -2 4 Yes
W Natural Tungsten (2) 430 ENDF/B–VIIR0 Yes TUNGSTEN 74 -2 430 Yes
W Natural Tungsten SelfShielded (2) 296 ENDF/B–VIIR0 Yes TUNGSTEN 74 -4 296 Yes
W Natural Tungsten SelfShielded (2) 87 ENDF/B–VIIR0 Yes TUNGSTEN 74 -4 87 Yes
W Natural Tungsten SelfShielded (2) 4 ENDF/B–VIIR0 Yes TUNGSTEN 74 -4 4 Yes
W Natural Tungsten SelfShielded (2) 430 ENDF/B–VIIR0 Yes TUNGSTEN 74 -4 430 Yes
Re Natural Rhenium 296 ENDF/B-VIIR0 Yes RHENIUM 75 -2 296 No
Re Natural Rhenium 87 ENDF/B-VIIR0 Yes RHENIUM 75 -2 87 No
Ir Natural Iridium 296 ENDF/B-VIIR0 Yes IRIDIUM 77 -2 296 Yes
Ir Natural Iridium 87 ENDF/B-VIIR0 Yes IRIDIUM 77 -2 87 Yes
Pt Natural Platinum 296 JEFF–3.1.1 No PLATINUM 78 -2 296 Yes
Pt Natural Platinum 87 JEFF–3.1.1 No PLATINUM 78 -2 87 Yes
197
Au Gold 197 296 ENDF/B–VIIR0 Yes GOLD 79 197 296 Yes
197
Au Gold 197 87 ENDF/B–VIIR0 Yes GOLD 79 197 87 Yes
197
Au Gold 197 SelfShielded 296 ENDF/B–VIIR0 Yes GOLD 79 1197 296 Yes
197
Au Gold 197 SelfShielded 87 ENDF/B–VIIR0 Yes GOLD 79 1197 87 Yes
197
Au Gold 197 0.1mm SelfShielded 296 ENDF/B–VIIR0 Yes GOLD 79 2197 296 Yes
Hg Natural Mercury (2) 296 ENDF/B–VIIR0 Yes MERCURY 80 -2 296 Yes
Hg Natural Mercury (2) 87 ENDF/B–VIIR0 Yes MERCURY 80 -2 87 Yes
Pb Natural Lead 296 ENDF/B–VIR8 Yes LEAD 82 -2 296 Yes
Pb Natural Lead 87 ENDF/B–VIR8 Yes LEAD 82 -2 87 Yes
Pb Natural Lead SelfShielded 296 ENDF/B–VIR8 Yes LEAD 82 -4 296 Yes
Pb Natural Lead SelfShielded 87 ENDF/B–VIR8 Yes LEAD 82 -4 87 Yes
209
Bi Bismuth 209 296 ENDF/B–VIR8 Yes BISMUTH 83 209 296 Yes
209
Bi Bismuth 209 87 ENDF/B–VIR8 Yes BISMUTH 83 209 87 Yes
209
Bi Bismuth 209 SelfShielded 296 ENDF/B–VIR8 Yes BISMUTH 83 1209 296 Yes
209
Bi Bismuth 209 SelfShielded 87 ENDF/B–VIR8 Yes BISMUTH 83 1209 87 Yes
230
Th Thorium 230 296 ENDF/B–VIIR0 Yes 230-TH 90 230 296 No
230
Th Thorium 230 87 ENDF/B–VIIR0 Yes 230-TH 90 230 87 No
232
Th Thorium 232 296 ENDF/B–VIR8 Yes 232-TH 90 232 296 Yes
232
Th Thorium 232 87 ENDF/B–VIR8 Yes 232-TH 90 232 87 Yes
233
U Uranium 233 296 ENDF/B–VIIR0 Yes 233-U 92 233 296 Yes
233
U Uranium 233 87 ENDF/B–VIIR0 Yes 233-U 92 233 87 Yes
234
U Uranium 234 296 ENDF/B–VIIR0 Yes 234-U 92 234 296 Yes
234
U Uranium 234 87 ENDF/B–VIIR0 Yes 234-U 92 234 87 Yes
235
U Uranium 235 296 ENDF/B–VIIR0 Yes 235-U 92 235 296 Yes
235
U Uranium 235 87 ENDF/B–VIIR0 Yes 235-U 92 235 87 Yes
238
U Uranium 238 296 ENDF/B–VIIR0 Yes 238-U 92 238 296 Yes
238
U Uranium 238 87 ENDF/B–VIIR0 Yes 238-U 92 238 87 Yes
239
Pu Plutonium 239 296 ENDF/B–VIIR0 Yes 239-PU 94 239 296 Yes
239
Pu Plutonium 239 87 ENDF/B–VIIR0 Yes 239-PU 94 239 87 Yes
241
Am Americium 241 296 ENDF/B–VIIR0 Yes 241-AM 95 241 296 Yes
241
Am Americium 241 87 ENDF/B–VIIR0 Yes 241-AM 95 241 87 Yes
Chapter 11
Collision tape
11.1 What is a collision tape and what is its purpose
A “collision tape” is a file where quantities describing selected events are recorded in the course of a Fluka
run.
This file is the standard output of the MGDRAW user routine, which can be customised by the user to obtain
different and/or more complete output (see the description of user routine MGDRAW in 13.2.13).
Note that “event” would be a more appropriate word than “collision”, and “file” better than “tape”.
For historical reasons, however, the expression “collision tape” is used in Monte Carlo jargon rather than
“event file”. It is true that most interesting events are generally collision events (but also boundary crossings,
decays, etc.), and that the large size of the file may require the use of a magnetic tape (or at least, that was
often the case in the past). Recently, the expression “phase space file” has also been used.
There are several reasons for which the user might decide to write a collision tape. Some examples are:
1) to perform a non-standard analysis or scoring. In general, this is not recommended because the
available Fluka scoring facilities are reliable, efficient and well tested. However, there may be special
cases where a user-written scoring is necessary.
2) to save details of transport for a new independent analysis. In this case, however, the user must
make sure that no phase-space region of interest be undersampled because of biasing options in the
corresponding run. As a general rule, writing of a collision file is not recommended in non-analogue
(biased) calculations.
3) to connect Fluka to other radiation transport codes (now less likely than in the past, since Fluka
covers most energy ranges and transports most particles which can be of interest).
4) to split the transport problem into two or more sequential phases. A technique used in deep penetration
calculations, which can be considered as an extension of splitting, consists in recording all particles
crossing a given boundary (with their energy, weight, coordinates and direction cosines at the point of
crossing), and then sampling source particles repeatedly from that set in a subsequent run [66]. A special
subroutine SOURCE (p. 363) must be written for this purpose. The user must make sure that the final
normalisation is done with respect to the total particle weight used in the first step of the procedure, and
not to that of the second step. It is also recommended to assign blackhole to all materials immediately
beyond the recording boundary, to avoid counting backscattered particles twice.
Fluka allows the user to write a complete dump of each source particle, of each trajectory and of each energy
deposition event, possibly under event-driven conditions specified by the user (see the description of user routine
MGDRAW in 13.2.13).
Three kinds of events can be recorded on a collision tape:
1) particle trajectories,
2) energy deposition events,
3) source particles.
By default, data are written on the collision tape in single precision and unformatted, but it is also possible
for the user to modify the MGDRAW subroutine and to obtain a more customised output file (see 13.2.13).
The variables written by the default version of MGDRAW, and their number, differ in the three cases.
The sign of the first (integer) variable dumped at an event indicates how to interpret the following ones:
Particle trajectories (first variable > 0):
First record:
NTRACK, MTRACK, JTRACK, ETRACK, WTRACK (three integers and two real variables)
Next record:
(XTRACK(I), YTRACK(I), ZTRACK(I), I = 0, NTRACK),
(DTRACK(J), J = 1, MTRACK), CTRACK
(that is, 3 × (NTRACK+1) + MTRACK + 1 real variables)
where NTRACK is the number of track segments, MTRACK the number of energy deposition events along the
track, JTRACK the particle type, ETRACK its total energy and WTRACK its weight; XTRACK, YTRACK, ZTRACK are
the end points of the track segments, DTRACK the energies deposited at the deposition events, and CTRACK the
total track length (these are the TRACKR variables, see 13.1.1).
Energy deposition events (first variable = 0):
First record:
0, ICODE, JTRACK, ETRACK, WTRACK (three integers and two real variables)
Next record:
XSCO, YSCO, ZSCO, RULL (4 real variables)
where ICODE indicates the type of event and the code section from which the call was made, JTRACK, ETRACK
and WTRACK are as above, XSCO, YSCO, ZSCO are the coordinates of the energy deposition point and RULL is the
amount of energy deposited (unweighted). The possible ICODE values are:
ICODE = 1x: call from subroutine KASKAD (hadron and muon transport part of Fluka)
= 10: elastic interaction recoil
= 11: inelastic interaction recoil
= 12: stopping particle
= 13: pseudo-neutron deposition
= 14: escape
= 15: time kill
ICODE = 2x: call from subroutine EMFSCO (electron/photon part of Fluka)
= 20: local energy deposition (i.e. photoelectric)
= 21: below threshold, iarg=1
= 22: below threshold, iarg=2
= 23: escape
= 24: time kill
ICODE = 3x: call from subroutine KASNEU (low-energy neutron part of Fluka)
= 30: target recoil
= 31: neutron below threshold
= 32: escape
ICODE = 4x: call from subroutine KASHEA (heavy ion part of Fluka)
= 40: escape
ICODE = 5x: call from subroutine KASOPH (optical photon part of Fluka)
= 50: optical photon absorption
= 51: escape
Source particles (first variable < 0):
First record:
-NCASE, NPFLKA, NSTMAX, TKESUM, WEIPRI (three integers and two real variables)
Next record:
(ILOFLK(I), ETOT(I), WTFLK(I), XFLK(I), YFLK(I), ZFLK(I), TXFLK(I), TYFLK(I),
TZFLK(I), I = 1, NPFLKA) (NPFLKA times: one integer + 8 real variables)
where NCASE is the number of the primary history being started, NPFLKA the number of particles loaded into
the particle stack, NSTMAX the maximum number of particles in the stack so far, TKESUM the total kinetic energy
of the primaries (when applicable) and WEIPRI their total weight; for each stack entry, ILOFLK is the particle
type, ETOT its total energy, WTFLK its weight, XFLK, YFLK, ZFLK its starting position and TXFLK, TYFLK, TZFLK
its direction cosines (these are the FLKSTK variables, see 13.1.1).
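A minimal reading sketch exploiting this structure is shown below (this program is not part of Fluka; the unit number, file name and array sizes are illustrative, and no provision is made for user-customised MGDRAW output such as the optical photon example of Chapter 12):

      PROGRAM RDDUMP
*  Sketch of a reader for the default (unformatted, single precision)
*  collision tape: branch on the sign of the first integer of each
*  record pair as described above.
      IMPLICIT NONE
      INTEGER MAXTRK, MAXSRC
      PARAMETER ( MAXTRK = 2500, MAXSRC = 2500 )
      INTEGER I1, I2, I3, IOS, J
      INTEGER ILO (MAXSRC)
      REAL R1, R2, CT, XS, YS, ZS, RL
      REAL XT (0:MAXTRK), YT (0:MAXTRK), ZT (0:MAXTRK), DT (MAXTRK)
      REAL ET (MAXSRC), WT (MAXSRC), XF (MAXSRC), YF (MAXSRC),
     &     ZF (MAXSRC), TX (MAXSRC), TY (MAXSRC), TZ (MAXSRC)
*
      OPEN ( UNIT = 33, FILE = 'example.dump', STATUS = 'OLD',
     &       FORM = 'UNFORMATTED' )
 10   CONTINUE
      READ ( 33, IOSTAT = IOS ) I1, I2, I3, R1, R2
      IF ( IOS .NE. 0 ) STOP
      IF ( I1 .GT. 0 ) THEN
*  Trajectory: I1 = NTRACK, I2 = MTRACK, I3 = JTRACK
         IF ( I1 .GT. MAXTRK .OR. I2 .GT. MAXTRK ) STOP 'too long'
         READ ( 33 ) ( XT (J), YT (J), ZT (J), J = 0, I1 ),
     &               ( DT (J), J = 1, I2 ), CT
      ELSE IF ( I1 .EQ. 0 ) THEN
*  Energy deposition: I2 = ICODE, I3 = JTRACK
         READ ( 33 ) XS, YS, ZS, RL
      ELSE
*  Source particles: -I1 = NCASE, I2 = NPFLKA, I3 = NSTMAX
         IF ( I2 .GT. MAXSRC ) STOP 'too many source particles'
         READ ( 33 ) ( ILO (J), ET (J), WT (J), XF (J), YF (J),
     &                 ZF (J), TX (J), TY (J), TZ (J), J = 1, I2 )
      END IF
      GO TO 10
      END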
In this respect, the user has the responsibility of issuing the right input directives: the code does not
perform any physics check on the assumptions about the light yield and the properties of material.
Optical photons (Fluka id = -1) are treated according to the laws of geometrical optics and therefore
can be reflected and refracted at boundaries between different materials. From the physics point of view,
optical photons have a certain energy (sampled according to the generation parameters given by the user) and
carry along their polarisation information. Cherenkov photons are produced with their expected polarisation,
while scintillation photons are assumed to be unpolarised. At each reflection or refraction, polarisation is
assigned or modified according to optics laws derived from Maxwell equations.
At a boundary between two materials with different refraction index, an optical photon is propagated
(refracted) or reflected with a relative probability calculated according to the laws of optics.
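For reference, the textbook relations are recalled below; they are standard optics and not a description of the exact sampling coded in Fluka, which, as explained above, also keeps track of the photon polarisation. The refracted direction follows Snell's law and, for an unpolarised photon, the reflection probability is the average of the two Fresnel coefficients:
\[
   n_1 \sin\theta_1 = n_2 \sin\theta_2\,, \qquad
   R = \frac{1}{2}\left[
   \left(\frac{n_1\cos\theta_1 - n_2\cos\theta_2}{n_1\cos\theta_1 + n_2\cos\theta_2}\right)^{2} +
   \left(\frac{n_1\cos\theta_2 - n_2\cos\theta_1}{n_1\cos\theta_2 + n_2\cos\theta_1}\right)^{2}
   \right]
\]
with total reflection (R = 1) when n1 sin θ1 > n2.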
Furthermore, optical photons can be absorbed in flight (if the user defines a non-zero absorption
coefficient for the material under consideration) or elastically scattered (Rayleigh scattering, if the user
defines a non-zero diffusion coefficient for the material under consideration).
In order to deal with optical photon problems, two specific input commands are available to the user:
OPT–PROD, which controls the generation of optical photons (e.g. Cherenkov and scintillation light), and
OPT–PROP, which defines the optical properties of materials. See p. 183 and 187 for a detailed description of
these options and of their parameters.
Some user routines are also available for a more complete representation of the physical problem:
ii. RFLCTV: to specify the reflectivity of a material. This can be activated by card OPT–PROP with
SDUM = METAL and WHAT(3) < -99.
iii. OPHBDX: to set optical properties of a boundary surface. The call is activated by card OPT–PROP with
SDUM = SPEC–BDX.
iv. FRGHNS: to set a possible degree of surface roughness, in order to have both diffusive and specular
reflectivity from a given material surface.
v. QUEFFC: to request a detailed quantum efficiency treatment. This is activated by card OPT–PROP
with SDUM = SENSITIV, setting the 0th optical photon sensitivity parameter to a value less than -99
(WHAT(1) < -99).
All running values of optical photon tracking are contained in COMMON TRACKR , just as for the other
ordinary elementary particles (see 13.1.1, 13.2.13).
That option sets the quantum efficiency as a function of photon energy throughout the whole problem;
it is not material- or region-dependent. The reason is that it is applied "a priori" at photon generation
time (for obvious time-saving reasons).
Summarising, the yes/no detection check is done at production and not at detection, in order to cut down
CPU time substantially. If one wants all photons to be produced the sensitivity must be set = 1.
Then it is still possible to apply a quantum efficiency curve at detection, by means of the user weighting
routine FLUSCW (see 13.2.6) or by a user-written off-line code.
Since the quantum efficiency curve provided by OPT–PROD with SDUM = SENSITIV is applied at
production and not at detection, it is not known which material the photon will eventually end up in.
Furthermore, WHAT(5) must be set anyway equal to the maximum quantum efficiency over the photon
energy range under consideration. One cannot use the QUEFFC routine as a way to provide an initial screening
on the produced photons, i.e., to use a “safe” initial guess for the quantum efficiency (say, for instance 20%)
and then, at detection, refine it through more sophisticated curves, i.e., rejecting against the actual quantum
efficiency/0.2 (this again can be done in routine FLUSCW). This makes sense of course if the user has different
quantum efficiency curves for different detectors (one should use in QUEFFC the curve that maximises all of
them and then refine it by rejection case by case), or if the quantum efficiency is position/angle dependent
upon arrival on the photomultiplier (again one should use inside QUEFFC the quantum efficiency for the most
efficient position/angle and refine by rejection at detection time).
Optical photons are absorbed in those materials where the user selected properties dictate absorption,
i.e., metals or materials with a non zero absorption cross section. These absorption events can be detected
in different ways. For instance:
a) through energy deposition by particle -1 (optical photons always have id = -1). Optical photons usually
deposit all their energy in one step (since only absorption and coherent scattering are implemented), so one
can check for JTRACK = -1 and energy deposition (RULL) in a given region (e.g., the photo-cathode
of the PMT). One can also apply an extra quantum efficiency selection, e.g., using the COMSCW user
routine (a minimal sketch is given after this list).
b) through boundary crossing of particles with id -1 into the given region; this is correct, however, only if
absorption is set such that the photon will not survive crossing the region. Again further selections
can be performed, e.g., using the FLUSCW user routine.
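A minimal sketch of case a) is the following (the argument list is the one documented for COMSCW in Chapter 13; the region number 3 and the 25% quantum efficiency are purely illustrative, and in a real application the efficiency could be made energy dependent using the information in COMMON TRACKR):

      DOUBLE PRECISION FUNCTION COMSCW ( IJ, XA, YA, ZA, MREG, RULL,
     &                                   LLO, ICALL )
      INCLUDE '(DBLPRC)'
      INCLUDE '(DIMPAR)'
      INCLUDE '(IOUNIT)'
      INCLUDE '(SCOHLP)'
*  Illustrative constant quantum efficiency of the photo-cathode
      PARAMETER ( QEFF = 0.25D+00 )
*  By default leave every score unchanged
      COMSCW = ONEONE
*  Region 3 is assumed to be the photo-cathode of the PMT
      IF ( MREG .EQ. 3 ) THEN
         IF ( IJ .EQ. -1 ) THEN
*  Optical photon: weight its energy deposition by the quantum efficiency
            COMSCW = QEFF
         ELSE
*  Any other particle: do not score its contribution in this region
            LSCZER = .TRUE.
         END IF
      END IF
      RETURN
      END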
Example input files are given below for Cherenkov light and for scintillation light (section 12.2.4). A specific
user routine, giving the refraction index of Liquid Argon as a function of wavelength, is also shown
(section 12.2.2).
It is a very simple case, in which muons are generated inside a box filled with Liquid Argon. Notice
that at present it is not yet possible to request optical photons as primary particles via the BEAM card.
Therefore light must be generated starting from ordinary particles, or by a special user-written SOURCE
routine, where optical photons are loaded into their dedicated stack (OPPHST) instead of that of ordinary
particles (FLKSTK). An example of such SOURCE is shown in section 12.2.1.
The examples presented here consider 0.5 GeV muons in a box of 4 × 4 × 4 m3 . In order to avoid
unnecessary complications in the example, secondary particle production by muons is switched off. Of course
this is not required in real problems.
As far as the output is concerned, the following example proposes a standard energy spectrum scoring
at a boundary (option USRBDX) applied to optical photons, together with a user-specific output built via
the MGDRAW user routine (see 13.2.13), where a dump of optical photon tracking is inserted. At the end of
this section (in section 12.2.1) we will propose the relevant code lines to be inserted in MGDRAW (activated by
the USERDUMP card, p. 241), together with an example of readout (section 12.2.6).
SUBROUTINE SOURCE ( NOMORE )
INCLUDE ’(DBLPRC)’
INCLUDE ’(DIMPAR)’
INCLUDE ’(IOUNIT)’
*
*----------------------------------------------------------------------*
* *
* Copyright (C) 1990-2009 by Alfredo Ferrari & Paola Sala *
* All Rights Reserved. *
* *
* *
* New source for FLUKA9x-FLUKA20xy: *
* *
* Created on 07 january 1990 by Alfredo Ferrari & Paola Sala *
* Infn - Milan *
* *
* Last change on 08-feb-09 by Alfredo Ferrari *
* *
* This is just an example of a possible user written source routine. *
* note that the beam card still has some meaning - in the scoring the *
* maximum momentum used in deciding the binning is taken from the *
* beam momentum. Other beam card parameters are obsolete. *
* *
* Output variables: *
* *
* Nomore = if > 0 the run will be terminated *
* *
*----------------------------------------------------------------------*
*
INCLUDE ’(BEAMCM)’
INCLUDE ’(FHEAVY)’
INCLUDE ’(FLKSTK)’
INCLUDE ’(IOIOCM)’
INCLUDE ’(LTCLCM)’
INCLUDE ’(PAPROP)’
INCLUDE ’(SOURCM)’
INCLUDE ’(SUMCOU)’
INCLUDE ’(OPPHST)’
INCLUDE ’(TRACKR)’
*
LOGICAL LFIRST
*
SAVE LFIRST
DATA LFIRST / .TRUE. /
*======================================================================*
* *
* BASIC VERSION *
* *
*======================================================================*
NOMORE = 0
* +-------------------------------------------------------------------*
* | First call initializations:
IF ( LFIRST ) THEN
* | *** The following 3 cards are mandatory ***
TKESUM = ZERZER
LFIRST = .FALSE.
LUSSRC = .TRUE.
* | *** User initialization ***
END IF
* |
* +-------------------------------------------------------------------*
* Push one source particle to the stack. Note that you could as well
* push many but this way we reserve a maximum amount of space in the
* stack for the secondaries to be generated
* LSTOPP is the stack counter: of course any time source is called it
* must be =0
IJBEAM = -1
LSTOPP = LSTOPP + 1
* Weight of optical photon
WTOPPH (LSTOPP) = ONEONE
WEIPRI = WEIPRI + WTOPPH (LSTOPP)
NUMOPH = NUMOPH + 1
IF ( NUMOPH .GT. 1000000000 ) THEN
MUMOPH = MUMOPH + 1
NUMOPH = NUMOPH - 1000000000
END IF
WOPTPH = WOPTPH + ONEONE
*
* Insert in POPTPH (LSTOPP) the proper energy for optical photon
*
POPTPH (LSTOPP) = 4.D-09
DONEAR (LSTOPP) = ZERZER
* Injection coordinates of optical photon
XOPTPH (LSTOPP) = XBEAM
YOPTPH (LSTOPP) = YBEAM
ZOPTPH (LSTOPP) = ZBEAM
* Initial direction cosines of optical photon
TXOPPH (LSTOPP) = UBEAM
TYOPPH (LSTOPP) = VBEAM
TZOPPH (LSTOPP) = WBEAM
* Set-up the polarization vector
TXPOPP (LSTOPP) = -TWOTWO
TYPOPP (LSTOPP) = ZERZER
TZPOPP (LSTOPP) = ZERZER
* age
AGOPPH (LSTOPP) = ZERZER
* total path
Notice that in this example a check is performed on the material number. In the following problems, the
light will be generated in material no. 18. In order to avoid problems, a Fluka abort is generated if the
routine is called by mistake for a different material.
*
*=== Rfrndx ===========================================================*
*
DOUBLE PRECISION FUNCTION RFRNDX ( WVLNGT, OMGPHO, MMAT )
INCLUDE ’(DBLPRC)’
INCLUDE ’(DIMPAR)’
INCLUDE ’(IOUNIT)’
*
*----------------------------------------------------------------------*
* *
* user-defined ReFRaction iNDeX: *
* *
* Created on 19 September 1998 by Alfredo Ferrari & Paola Sala *
* Infn - Milan *
* *
* Last change on 25-Oct-02 by Alfredo Ferrari *
* *
* *
*----------------------------------------------------------------------*
*
INCLUDE ’(FLKMAT)’
*
* Check on the material number
*
IF ( MMAT .NE. 18 ) THEN
CALL FLABRT ( ’RFRNDX’, ’MMAT IS NOT SCINTILLATOR!’ )
END IF
*
WL = WVLNGT * 1.D+07
RFRNDX = ONEONE
Cherenkov light generation depends on the refraction index. Among the different possibilities, here we have
chosen to give the refraction index by means of the user routine shown above. The relevant data cards are
commented. The value inserted for light absorption in this example is arbitrary, while the mean free path for
Rayleigh scattering is the result obtained from measurements performed in the framework of the ICARUS
collaboration.
TITLE
Test of Cherenkov light production in Liquid Argon
DEFAULTS PRECISIO
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+
BEAM -10.000 MUON+
BEAMPOS 0.0 0.0 190.0 NEGATIVE
DELTARAY -1.0 18.0 18.0
PAIRBREM -3.0 18.0 18.0
MUPHOTON -1.0 18.0 18.0
PHOTONUC -1.0 3.0 100.0
IONTRANS -6.0
DISCARD 27.0 28.0 43.0 44.0 5.0 6.0
GEOBEGIN COMBINAT
Test
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+
* A large box for the blackhole
RPP 1 -9999999. +9999999. -9999999. +9999999. -9999999. +9999999.
* A smaller box for liquid argon
RPP 2 -200.0 +200.0 -200.0 +200.0 -200.0 +200.0
END
*== Region Definitions =================================================
* 1) Blackhole
BL1 +1 -2
* 2) Liquid Argon
LG3 +2
END
GEOEND
* Switch off electron and photon transport
EMF EMF-OFF
*
MATERIAL 18.0 0.0 1.400 18.0 ARGON
* Select neutron cross sections at liquid argon temperature
LOW-MAT 18.0 18.0 -2.0 87.0 ARGON
*
ASSIGNMAT 1.0 1.0 500. 1.0 0.0
ASSIGNMAT 18.0 2.0 2.0
*
* Set Light production/transport properties: from 100 to 600 nm in all materials
OPT-PROP 1.000E-05 3.500E-05 6.000E-05 3.0 100.0 WV-LIMIT
* Set all materials to "metal" with 0 reflectivity:
OPT-PROP 1.0 3.0 100.0 METAL
* resets all previous properties for material n. 18 (Liquid Argon)
OPT-PROP 18.0 RESET
* switches off scintillation light production in material n. 18 (Liq. Argon)
OPT-PROD 18.0 SCIN-OFF
* defines Cherenkov production for material n. 18 (Liq. Argon)
Here it is necessary to point out that, at present, Fluka can generate scintillation light only in the form of
monochromatic lines. A maximum of 3 different lines is possible. The value inserted here (128 nm) is the
correct one for Liquid Argon. The fraction of deposited energy going into scintillation light depends on the
degree of recombination after ionisation. Again, the value used here is a parameter justified in the framework
of the ICARUS collaboration, where about 20000 photons/MeV of deposited energy have been measured for
the electric field of 500 V/cm (the field used in ICARUS). A different electric field intensity will change the
degree of recombination and therefore the light yield.
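The connection between the physical yield and the card parameter is straightforward (given here only for orientation, and using the fact, stated in the card comments below, that the parameter is the fraction of deposited energy converted into light): a yield of Y photons per MeV of deposited energy corresponds to a fraction
\[
   f \;=\; Y \cdot E_\gamma\,, \qquad E_\gamma \;=\; \frac{hc}{\lambda} \;\approx\; \frac{1.24\times 10^{-3}\ \mathrm{MeV\,nm}}{128\ \mathrm{nm}} \;\approx\; 9.7\times 10^{-6}\ \mathrm{MeV}
\]
so that the recombination-dependent yield translates directly into the value to be given on the SCINT–WV card.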
TITLE
Test of scintillation light production in Liquid Argon
DEFAULTS PRECISIO
BEAM -0.5000 MUON+
BEAMPOS 0.0 0.0 199.0 NEGATIVE
*
DELTARAY -1.0 18.0 18.0
PAIRBREM -3.0 18.0 18.0
MUPHOTON -1.0 18.0 18.0
PHOTONUC -1.0 3.0 100.0
IONTRANS -6.0
DISCARD 27.0 28.0 43.0 44.0 5.0 6.0
GEOBEGIN COMBINAT
Test
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+
* A large box for the blackhole
RPP 1 -9999999. +9999999. -9999999. +9999999. -9999999. +9999999.
* A smaller box for liquid argon
RPP 2 -200.0 +200.0 -200.0 +200.0 -200.0 +200.0
END
*== Region Definitions =================================================
* 1) Blackhole
BL1 +1 -2
* 2) Liquid Argon
LG3 +2
END
GEOEND
*
EMF EMF-OFF
*
MATERIAL 18.0 0.0 1.400 18.0 ARGON
LOW-MAT 18.0 18.0 -2.0 87.0 ARGON
ASSIGNMAT 1.0 1.0 500. 1.0 0.0
ASSIGNMAT 18.0 2.0 2.0
*
* Set Light production/transport properties: from 100 to 600 nm in all materials
OPT-PROP 1.000E-05 1.280E-05 6.000E-05 3.0 100.0 WV-LIMIT
* Set all materials to "metal" with 0 reflectivity:
OPT-PROP 1.0 3.0 100.0 METAL
* resets all previous properties for material n. 18 (Liquid Argon)
OPT-PROP 18.0 RESET
* switches off Cherenkov light production in material n. 18 (Liquid Argon)
OPT-PROD 18.0 CERE-OFF
* defines Scint. light production for material n. 18 (Liq. Argon). Parameters:
* a) wavelength (cm) of first scintillation line.
* b) fraction of deposited energy going into scint. light
* (in Liquid Argon ~ 2 10**4 photons/MeV)
OPT-PROD 1.280E-05 9.686E-02 18.0 SCINT-WV
* The following card restores the wave-length limits for material n. 18
OPT-PROP 1.000E-05 1.280E-05 6.000E-05 18.0 WV-LIMIT
* The following card, for material n. 18:
* a) calls the RFRNDX user routine (to define the refraction index
* vs wave-length (WHAT(1)< -99)
* b) sets to 1000 cm the mean free path for absorption.
* c) sets to 90 cm the mean free path for Rayleigh scattering
OPT-PROP -100.0 0.001 0.01111 18.0
* The following card defines the "Sensitivity" in order to introduce the
* maximum Quantum Efficiency at generation level. Here 1/10 of photons are
* actually generated.
* Fluctuations are properly sampled
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+
OPT-PROP 0.1 0.1 SENSITIV
SCORE 208.0 211.0 201.0 210.0
RANDOMIZ 1.0
*
USRBDX 1.0 -1.0 -55.0 2.0 1.0 Opt.Phot
USRBDX 12.0E-09 0.0 120.0 &
USERDUMP 111. 2. MGDRAW
START 10000.
STOP
The user can request any kind of standard Fluka output for optical photons and also a user specific output,
starting for instance from the MGDRAW user routine. Here an example follows, where a few variables are simply
recorded in the output “collision tape” (dump file) at each step in the tracking only for particle id = -1
(optical photons).
In order to exploit the flags available in this routine (explained in the various comments), it can be useful
to know that the Fluka routine which drives the transport of optical photons is KASOPH.
The content of COMMON TRACKR can be used to take full advantage of the possibilities offered by the
MGDRAW routine (see 13.2.13).
Warning: in the present version of Fluka, it is not yet possible to use the User Particle
Properties for optical photons (variables SPAUSR, ISPUSR and the STUPRF user routine).
SUBROUTINE MGDRAW ( ICODE, MREG )
INCLUDE ’(DBLPRC)’
INCLUDE ’(DIMPAR)’
INCLUDE ’(IOUNIT)’
*
*----------------------------------------------------------------------*
* *
* MaGnetic field trajectory DRAWing: actually this entry manages *
* all trajectory dumping for *
* drawing *
* *
* Created on 01 march 1990 by Alfredo Ferrari *
* INFN - Milan *
* last change 05-may-06 by Alfredo Ferrari *
* INFN - Milan *
* *
*----------------------------------------------------------------------*
*
INCLUDE ’(CASLIM)’
INCLUDE ’(COMPUT)’
INCLUDE ’(SOURCM)’
INCLUDE ’(FHEAVY)’
INCLUDE ’(FLKSTK)’
INCLUDE ’(GENSTK)’
INCLUDE ’(MGDDCM)’
INCLUDE ’(PAPROP)’
INCLUDE ’(QUEMGD)’
INCLUDE ’(SUMCOU)’
INCLUDE ’(TRACKR)’
*
DIMENSION DTQUEN ( MXTRCK, MAXQMG )
*
CHARACTER*20 FILNAM
LOGICAL LFCOPE
SAVE LFCOPE
DATA LFCOPE / .FALSE. /
*
*----------------------------------------------------------------------*
* *
* Icode = 1: call from Kaskad *
* Icode = 2: call from Emfsco *
* Icode = 3: call from Kasneu *
* Icode = 4: call from Kashea *
* Icode = 5: call from Kasoph *
* *
*----------------------------------------------------------------------*
* *
IF ( .NOT. LFCOPE ) THEN
LFCOPE = .TRUE.
IF ( KOMPUT .EQ. 2 ) THEN
FILNAM = ’/’//CFDRAW(1:8)//’ DUMP A’
ELSE
FILNAM = CFDRAW
END IF
WRITE(*,*) ’TRAJECTORY OPEN!’
WRITE(*,’(A)’) ’FILNAM = ’,FILNAM
OPEN ( UNIT = IODRAW, FILE = FILNAM, STATUS = ’NEW’, FORM =
& ’UNFORMATTED’ )
END IF
C
C Write trajectories of optical photons
C
IF(JTRACK .EQ. -1) THEN
WRITE (IODRAW) NTRACK, MTRACK, JTRACK, SNGL (ETRACK),
& SNGL (WTRACK)
WRITE (IODRAW) ( SNGL (XTRACK (I)), SNGL (YTRACK (I)),
& SNGL (ZTRACK (I)), I = 0, NTRACK ),
& ( SNGL (DTRACK (I)), I = 1,MTRACK ),
& SNGL (CTRACK)
WRITE(IODRAW) SNGL(CXTRCK),SNGL(CYTRCK),SNGL(CZTRCK)
ENDIF
RETURN
*
*======================================================================*
* *
* Boundary-(X)crossing DRAWing: *
* *
* Icode = 1x: call from Kaskad *
* 19: boundary crossing *
* Icode = 2x: call from Emfsco *
* 29: boundary crossing *
* Icode = 3x: call from Kasneu *
* 39: boundary crossing *
* Icode = 4x: call from Kashea *
* 49: boundary crossing *
* Icode = 5x: call from Kasoph *
* 59: boundary crossing *
* *
*======================================================================*
* *
ENTRY BXDRAW ( ICODE, MREG, NEWREG, XSCO, YSCO, ZSCO )
RETURN
*
*======================================================================*
* *
* Event End DRAWing: *
* *
*======================================================================*
* *
ENTRY EEDRAW ( ICODE )
RETURN
*
*======================================================================*
* *
* ENergy deposition DRAWing: *
* *
* Icode = 1x: call from Kaskad *
* 10: elastic interaction recoil *
* 11: inelastic interaction recoil *
* 12: stopping particle *
* 13: pseudo-neutron deposition *
* 14: escape *
* 15: time kill *
* Icode = 2x: call from Emfsco *
* 20: local energy deposition (i.e. photoelectric) *
* 21: below threshold, iarg=1 *
* 22: below threshold, iarg=2 *
* 23: escape *
* 24: time kill *
* Icode = 3x: call from Kasneu *
* 30: target recoil *
RETURN
*=== End of subroutine Mgdraw ==========================================*
END
A sample program to read out the output obtained from the previously shown MGDRAW routine is presented
here below. In this example the key routine is the one called VXREAD, where some trivial output is sent to
logical units 66 and 67. Of course the user must adapt such a readout program to his own needs.
PROGRAM MGREAD
CHARACTER FILE*80
*
WRITE (*,*)’ Name of the binary file?’
READ (*,’(A)’) FILE
OPEN ( UNIT = 33, FILE = FILE, STATUS =’OLD’,
& FORM = ’UNFORMATTED’ )
1000 CONTINUE
WRITE (*,*)’ Event number?’
READ (*,*) NCASE
IF ( NCASE .LE. 0 ) STOP
CALL VXREAD (NCASE)
GO TO 1000
END
DO J=1,NTRACK+1
WRITE(67,*) XH(J),YH(J),ZH(J),
& CXX,CYY,CZZ
END DO
IF ( NEVT.EQ.NCASE ) THEN
WRITE(66,*)’ New step:’
WRITE(66,*)’ Part.id.:’,JTRACK,’ Kin.En.:’,ETRACK,
& ’ N.of substep:’, NTRACK
WRITE(66,*)’ X, Y, Z, i=0, # substep’
WRITE(*,*)’ New step:’
WRITE(*,*)’ Part.id.:’,JTRACK,’ Kin.En.:’,ETRACK,
& ’ N.of substep:’, NTRACK
WRITE(*,*)’ X, Y, Z, i=0, # substep’
END IF
* | |
* | +----------------------------------------------------------------*
* | | Energy deposition data:
ELSE IF ( NDUM .EQ. 0 ) THEN
ICODE1=MDUM/10
ICODE2=MDUM-ICODE1*10
IJDEPO=JDUM
ENPART=EDUM
WDEPOS=WDUM
READ (LUNSCR) XSCO, YSCO, ZSCO, ENDEPO
IF ( NEVT.EQ.NCASE ) THEN
WRITE(66,*) ’ En. dep. code n.:’,MDUM
WRITE(66,*) IJDEPO,’ Tot. en. proj.:’, ENPART,
& ’ Weight:’,WDEPOS
WRITE(66,*) ’ Position:’,XSCO,YSCO,ZSCO,
& ’ En. Dep.:’,ENDEPO
END IF
* | |
* | +----------------------------------------------------------------*
* | | Source particle:
ELSE
NEVT =-NDUM
LPRIMA = MDUM
NSTMAX = JDUM
TKESUM = EDUM
WEIPRI = WDUM
READ (LUNSCR) ( IPR(J),EPR(J),WPR(J),XPR(J),YPR(J),
& ZPR(J),TXP(J),TYP(J),TZP(J),J=1,LPRIMA )
DO J = 1, LPRIMA
IF ( ABS(IPR(J)) .LT. 10000 ) THEN
LPTRUE=J
END IF
END DO
LPROJ = LPRIMA - LPTRUE
LPRIMA = LPTRUE
IF (NEVT .EQ. NCASE) THEN
WRITE(66,*)’ Event #’,NEVT
IF ( LPROJ .GT. 0) THEN
WRITE(66,*)
& ’ Original projectile(s),n. of:’,LPROJ
DO IJ = 1, LPROJ
J=LPRIMA+IJ
IPR(J) = IPR(J)/10000
WRITE(66,*) ’ Part.id.:’,IPR(J),’ Kin.en.:’,
& EPR(J),’ Weight:’,WPR(J)
WRITE(66,*) IPR(J),EPR(J),WPR(J)
WRITE(66,*) ’ Position :’, XPR(J),YPR(J),ZPR(J)
WRITE(66,*) ’ Direction:’, TXP(J),TYP(J),TZP(J)
END DO
END IF
WRITE(66,*)’ Source particle(s), n. of:’,LPRIMA
DO J = 1, LPRIMA
WRITE(66,*) ’ Part.id.:’,IPR(J),’ Kin.en.:’,
& EPR(J),’ Weight:’,WPR(J)
WRITE(66,*) ’ Position :’, XPR(J),YPR(J),ZPR(J)
WRITE(66,*) ’ Direction:’, TXP(J),TYP(J),TZP(J)
C WRITE(67,*) XPR(J)/1.E+05,YPR(J)/1.E+05,ZPR(J)/1.E+05
END DO
END IF
IF (NEVT.GT.NCASE) GO TO 4100
END IF
* | |
* | +----------------------------------------------------------------*
4000 CONTINUE
* |
* +-------------------------------------------------------------------*
4100 CONTINUE
RETURN
END
Chapter 13
User routines
Unlike some other Monte Carlo particle transport codes, Fluka gets its input mainly from a simple file.
It offers a rich choice of options for scoring most quantities of possible interest and for applying different
variance reduction techniques, without requiring the users to write a single line of code. However, although
normally there is no need for any “user code”, there are special cases where this is unavoidable, either because
of the complexity of the problem, or because the desired information is too unusual or too problem-specific
to be offered as a standard option.
And on the other hand, even when this is not strictly necessary, experienced programmers may like
to create customised input/output interfaces. A number of user routines (available on LINUX and UNIX
platforms in directory usermvax) allow the user to define non-standard input and output, and in some cases even to
modify to a limited extent the normal particle transport. Most of them are already present in the Fluka
library as dummy or template routines, and require a special command in the standard input file to be
activated. Users can modify any one of these routines, and even insert into them further calls to their own
private ones, or to external packages (at their own risk!). This increased flexibility must be balanced against
the advantage of using as far as possible the Fluka standard facilities, which are known to be reliable and
well tested.
The general procedure for using them is the following:
1) make a modified copy of one or more of these routines. It is recommended that each modified routine
should always print an informative message when called for the first time, to confirm that it has
been successfully activated, for future documentation, and to avoid misinterpretations of the standard
output. It is important to remember that when calling modified user routines, the units, titles etc.
reported in the normal Fluka output often become meaningless.
A typical way to do this is:
..............
LOGICAL LFIRST
SAVE LFIRST
DATA LFIRST /.TRUE./
* return message from first call
IF (LFIRST) THEN
WRITE(LUNOUT,*) ’Version xxx of Routine yyy called’
LFIRST = .FALSE.
ENDIF
..............
IMPORTANT: The user should not modify the value of any argument in a routine calling list, except
when marked as “returned” in the description of the routine here below.
Similarly, no variable contained in COMMON blocks should be overwritten unless explicitly indicated.
2) compile the modified routines (with the fff script on LINUX/UNIX):
$FLUPRO/flutil/fff yyy.f (produces a new file yyy.o)
3) link them (with the lfluka script on LINUX/UNIX) to the Fluka library and any additional library
of interest (for instance CERNLIB):
$FLUPRO/flutil/lfluka -o myfluka -m fluka yyy.o
This will produce a new executable (indicated here as myfluka).
To run the new executable, launch the usual rfluka script with the option -e myfluka.
It is recommended that at least the following lines be present at the beginning of each routine:
INCLUDE ’(DBLPRC)’
INCLUDE ’(DIMPAR)’
INCLUDE ’(IOUNIT)’
Each INCLUDE file contains a COMMON block, plus related constants. Additional INCLUDEs may be useful, in
particular BEAMCM, CASLIM, EMFSTK, SOURCM, EVTFLG, FHEAVY, GENSTK, LTCLCM, FLKMAT, RESNUC,
SCOHLP, SOUEVT, FLKSTK, SUMCOU, TRACKR, USRBIN, USRBDX, USRTRC, USRYLD. Note the parentheses
which are an integral part of the Fluka INCLUDE file names.
Files flukaadd.add and emfadd.add, or directory $FLUPRO/flukapro, contain a full documentation about
the meaning of the variables of these INCLUDE files.
Function ABSCFF returns a user-defined absorption coefficient for optical photons. It is activated by setting
WHAT(2) < -99 in command OPT–PROP, with SDUM = blank. See p. 187 and Chap. 12 for more information.
Argument list of function COMSCW (described below):
IJ : particle type (1 = proton, 8 = neutron, etc.: see code in 5.1).
Input only, cannot be modified.
XA,YA,ZA : current particle position
MREG : current geometry region
RULL : amount to be deposited (unweighted)
LLO : particle generation. Input only, cannot be modified.
ICALL : internal code calling flag (not for general use)
This function is activated by option USERWEIG (p. 244) with WHAT(6) > 0.0. Energy and star densities
obtained via SCORE (p. 226) and USRBIN(p. 249), energy and stars obtained via EVENTBIN (p. 124) and
production of residual nuclei obtained via RESNUCLEi (p. 218) are multiplied by the value returned by
this function. The user can implement any desired logic to differentiate the returned value according to
any information contained in the argument list (particle type, position, region, amount deposited, particle
generation), or information available in COMMON SCOHLP (binning number, type of scored quantity). The
scored quantity is given by the flag ISCRNG and the binning/detector number by JSCRNG (both in SCOHLP); the latter is printed in the standard output between the estimator type and the detector name.
Note that a detector of residual nuclei can have the same JSCRNG number as a binning (use the value
of ISCRNG to discriminate). Further information can be obtained including COMMON TRACKR (for instance
particle’s total energy, direction cosines, age). TRACKR contains also special user variables (both integer
and in double precision) which can be used to save information about particles which have undergone some
particular event. If data concerning the current material are needed, they can be accessed as MEDIUM(MREG), provided the (FLKMAT) INCLUDE file is included. Indeed, a common simple application of COMSCW is to score dose according to the
local density (especially useful to get the correct average dose in bins straddling a boundary between two
different media):
..................
INCLUDE '(FLKMAT)'
INCLUDE '(SCOHLP)'
..................
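A minimal sketch of such a density weighting is given below; it assumes that ISCRNG = 1 flags energy deposition scoring and that the local density is available as RHO(MEDIUM(MREG)) from the (FLKMAT) INCLUDE file (both assumptions should be checked against the INCLUDE files and against the flag values documented for SCOHLP):
      DOUBLE PRECISION FUNCTION COMSCW ( IJ, XA, YA, ZA, MREG, RULL,
     &                                   LLO, ICALL )
      INCLUDE '(DBLPRC)'
      INCLUDE '(DIMPAR)'
      INCLUDE '(IOUNIT)'
      INCLUDE '(FLKMAT)'
      INCLUDE '(SCOHLP)'
*  No weighting by default
      COMSCW = ONEONE
*  Convert the scored energy density (GeV/cm3) into dose (GeV/g) by
*  dividing by the local density (assumption: ISCRNG = 1 flags energy
*  deposition scoring)
      IF ( ISCRNG .EQ. 1 ) COMSCW = ONEONE / RHO ( MEDIUM (MREG) )
      RETURN
      END
In a real routine one would also print the recommended first-call message and, if needed, use JSCRNG to restrict the weighting to a specific binning.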
Note that the variables in the argument list, with the exception of IJ, LLO and ICALL, are local copies
of those used for particle transport, and therefore can be modified to have an effect on scoring, without
affecting transport.
If name-based input is being used, the name corresponding to MREG can be obtained via a call to routine
GEOR2N:
CALL GEOR2N (NUMREG, NAMREG, IERR)
where NUMREG (input variable) is the region number, and NAMREG (returned variable) is the corresponding
region name (to be declared as CHARACTER*8). IERR is a returned error code: if = 0 the conversion is
successful. See example in the description of BXDRAW below.
Note: setting the variable LSCZER = .TRUE. before RETURN (LSCZER is in COMMON SCOHLP), will cause zero
scoring whatever the value returned by COMSCW. This is more efficient than returning a zero value.
Function DFFCFF returns a user-defined diffusion coefficient for optical photons. It is activated by setting
WHAT(3) < -99 in command OPT–PROP, with SDUM = blank. See Sec. 7.51 and Chap. 12 for more infor-
mation.
Argument list
IJ : particle type (input only)
NTRUCK : number of step points (input only)
XTRUCK, YTRUCK, ZTRUCK : particle step points, can be modified by user
MREG : region number (input only)
LLO : particle generation (input only)
ICALL : internal code calling flag (not for general use)
Subroutine ENDSCP makes it possible to shift, by a user-defined distance, the energy which is being deposited along a step (or along several step portions), by providing new segment endpoints. A typical application is to simulate an instrument drift.
If name-based input is being used, the name corresponding to MREG can be obtained via a call to routine
GEOR2N:
CALL GEOR2N (NUMREG, NAMREG, IERR)
where NUMREG (input variable) is the region number, and NAMREG (returned variable) is the corresponding
region name (to be declared as CHARACTER*8). IERR is a returned error code: if = 0 the conversion is
successful. See example in the description of BXDRAW below.
Argument list
IJ : particle type (input only)
PLA : particle momentum (if > 0), or kinetic energy (if < 0) (input
only)
TXX, TYY, TZZ : particle direction cosines, can be modified by user
NTRUCK : number of step points (input only)
XTRUCK, YTRUCK, ZTRUCK : particle step points, can be modified by user
NRGFLK : new region number (input only)
IOLREG : old region number (input only)
LLO : particle generation (input only)
ICALL : internal code calling flag (not for general use)
Subroutine FLDSCP makes it possible to shift, by a user-defined distance, the track whose length is being scored as fluence along a step (or along several step portions), by providing new segment endpoints. A typical application is to simulate an instrument drift.
If name-based input is being used, the names corresponding to MREG and IOLREG can be obtained via a call
to routine GEOR2N:
CALL GEOR2N (NUMREG, NAMREG, IERR)
where NUMREG (input variable) is the region number, and NAMREG (returned variable) is the corresponding
region name (to be declared as CHARACTER*8). IERR is a returned error code: if = 0 the conversion is
successful. See example in the description of BXDRAW below.
Argument list:
IJ : particle type (input only, cannot be modified)
PLA : particle momentum (if > 0.0)
or -PLA = kinetic energy (if < 0.0)
TXX, TYY, TZZ : particle current direction cosines
WEE : particle weight
XX, YY, ZZ : particle position
NRGFLK : current region (after boundary crossing)
IOLREG : previous region (before boundary crossing). Useful only with
boundary crossing estimators (for other estimators it has no
meaning)
LLO : particle generation (input only, cannot be modified)
NSURF : internal code calling flag (not for general use)
Function FLUSCW is activated by option USERWEIG (p. 244), with WHAT(3) > 0.0. Yields obtained via
USRYIELD (p. 265), fluences calculated with USRBDX, USRTRACK, USRCOLL, USRBIN (respectively p. 246,
262, 257, 249), and currents calculated with USRBDX are multiplied by the value returned by this function.
The user can implement any desired logic to differentiate the returned value according to any information
contained in the argument list (particle type, energy, direction, weight, position, region, boundary, particle
generation), or information available in COMMON SCOHLP (binning or detector number, estimator type). The
estimator type is given by the flag ISCRNG (in COMMON SCOHLP):
ISCRNG = 1 −→ Boundary crossing estimator
ISCRNG = 2 −→ Track-length binning
ISCRNG = 3 −→ Track-length estimator
ISCRNG = 4 −→ Collision density estimator
ISCRNG = 5 −→ Yield estimator
The binning/detector number is given by JSCRNG (in COMMON SCOHLP) and is printed in output:
Bdrx n. 2 "bdxname" , generalised particle n. 8, from region n. 22 to region n. 78
Track n. 6 "trkname" , generalised particle n. 14, region n. 9
Note that a track-length detector can have the same JSCRNG number as a boundary crossing one or a binning
etc. (use the value of ISCRNG to discriminate the different estimators). Further information can be obtained
including COMMON TRACKR (for instance particle age). TRACKR contains also special user variables (both integer
and in double precision) which can be used to save information about particles which have undergone some
particular event.
Function FLUSCW has many applications. A common one is conditional scoring (score only if within a certain
distance from a point, etc.): for instance it is possible to implement a sort of 2-dimensional fluence binning
on a plane boundary.
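As an illustration, the sketch below (using the argument names of the list above; the reference point and radius are arbitrary) accepts the score only within 50 cm of a given point, which effectively turns a plane binning or boundary detector into a circular one:
      DOUBLE PRECISION FUNCTION FLUSCW ( IJ, PLA, TXX, TYY, TZZ, WEE,
     &                                   XX, YY, ZZ, NRGFLK, IOLREG,
     &                                   LLO, NSURF )
      INCLUDE '(DBLPRC)'
      INCLUDE '(DIMPAR)'
      INCLUDE '(IOUNIT)'
*  Illustrative reference point and acceptance radius (cm)
      PARAMETER ( X0 = 10.0D0, Y0 = 0.0D0, RMAX = 50.0D0 )
*  Accept the score only within RMAX of (X0,Y0), suppress it elsewhere
      IF ( (XX-X0)**2 + (YY-Y0)**2 .LE. RMAX**2 ) THEN
         FLUSCW = ONEONE
      ELSE
         FLUSCW = ZERZER
      END IF
      RETURN
      END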
Other interesting applications are based on the fact that FLUSCW is called at every boundary crossing, provided
that at least one USRBDX detector has been requested. Although the function has been designed mainly to weight scored quantities, it can be “cheated” into performing all sorts of side tasks, even ones not directly connected with scoring. Note that the variables in the argument list, with the exception of IJ, LLO and NSURF, are
local copies of those used for particle transport, and therefore can be modified to have an effect on scoring,
without affecting transport.
If name-based input is being used, the names corresponding to NRGFLK and IOLREG can be obtained via a call
to routine GEOR2N:
CALL GEOR2N (NUMREG, NAMREG, IERR)
where NUMREG (input variable) is the region number, and NAMREG (returned variable) is the corresponding
region name (to be declared as CHARACTER*8). IERR is a returned error code: if = 0 the conversion is
successful. See example in the description of BXDRAW below.
Note: setting the variable LSCZER = .TRUE. before RETURN (LSCZER is in COMMON SCOHLP), will cause zero
scoring whatever the value returned by COMSCW or FLUSCW. This is more efficient than returning a zero value.
Function FORMFU can be used to override the standard value of the nuclear charge form factor. It must
return the squared value of the nuclear charge form factor for particle IJ. The default version computes the
form factor in Born approximation for a medium of given composition, using the simple expression given by
Tsai [201], and accounts also for the contribution of incoherent scattering.
The function is called by the multiple and single scattering routines if option MULSOPT (p. 174) has been
issued with WHAT(3) < 0.0 for electrons and positrons, or WHAT(2) < 0.0 for hadrons and muons. See
Note 2 to option MULSOPT, p. 176.
Function FRGHNS can be used to return a non-zero value for the roughness of a boundary between two
materials, relevant for optical photon transport (default roughness is zero for all boundaries). Meaningful
only if options OPT–PROP or OPT–PROD (p. 187, 183) have been requested. See Sec. 7.51 and Chap. 12 for
more information.
If name-based input is being used, the names corresponding to MREG and NEWREG can be obtained via a call
to routine GEOR2N:
CALL GEOR2N (NUMREG, NAMREG, IERR)
where NUMREG (input variable) is the region number, and NAMREG (returned variable) is the corresponding
region name (to be declared as CHARACTER*8). IERR is a returned error code: if = 0 the conversion is
successful. See example in the description of BXDRAW below.
The three functions MUSRBR, LUSRBL and FUSRBV are used to define 3-dimensional fluence distributions to be calculated by special user-defined binnings (see option USRBIN with WHAT(1) = 8.0 in the first card).
If name-based input is being used, the name corresponding to MREG can be obtained via a call to routine
GEOR2N:
where NUMREG (input variable) is the region number, and NAMREG (returned variable) is the corresponding
region name (to be declared as CHARACTER*8). IERR is a returned error code: if = 0 the conversion is
successful. See example in the description of BXDRAW below.
MUSRBR defines a first discrete (integer) variable (by default: region number)
LUSRBL defines another discrete (integer) variable (by default: lattice cell number)
FUSRBV defines a continuous (double precision) variable (by default: zero)
The 3 functions are called at track-length events. What is scored is the particle track-length multiplied by
the particle’s weight, possibly modified by a user-written FLUSCW user routine (see 13.2.6).
Subroutine LATTIC is activated by one or more LATTICE cards in the geometry input (see 8.2.10). It is
expected to transform coordinates and direction cosines from any lattice cell (defined by card LATTICE) to
the reference system in which the basic structure has been described.
The user is expected to provide a transformation of coordinates and vector direction cosines from each lattice
cell to the corresponding basic structure (in ENTRY LATTIC) and of direction cosines from the basic structure
to each corresponding lattice cell (in ENTRY LATNOR).
Entries:
LATTIC (position and direction symmetry transformation from lattice cell to prototype structure)
Argument list
XB(1), XB(2), XB(3) : actual physical position coordinates in IRLTGG
lattice cell
WB(1), WB(2), WB(3) : actual physical direction cosines in IRLTGG lat-
tice cell
DIST : current step length
SB(1), SB(2), SB(3) : transformed coordinates in prototype cell
UB(1), UB(2), UB(3) : transformed cosines in prototype cell
IR : region number in prototype cell
IRLTGG : lattice cell number
IRLT : array containing region indices corresponding to lattice cells
IFLAG : reserved variable
LATTIC returns the tracking point coordinates (SB) and direction cosines (UB) in the reference prototype
geometrical structure, corresponding to real position/direction XB, WB in the actual cell IRLTGG (defined as
input region IR by a LATTICE card, see 8.2.10).
When the lattice option is activated, the tracking proceeds in two different systems: the “real” one, and that
of the basic symmetry unit. Particle positions and directions are swapped from their real values to their
symmetric ones in the basic cell, to perform the physical transport in the regions and materials that form
the prototype geometrical structure and back again to the real world. The correspondence between “real”
and “basic” position/direction depends on the symmetry transformation and on the lattice cell number.
LATNOR (LATtice cell NORmal transformation from prototype structure to lattice cell)
Argument list
UN(1), UN(2), UN(3) : direction cosines of the vector normal to the
surface, in the prototype cell (entry values) and in the lattice cell
(returned values)
IRLTNO : present lattice cell number
ENTRY LATNOR transforms the direction cosines stored in the vector UN(3) from the system of the basic
prototype unit to that of the real world in lattice cell number IRLTNO. Therefore, this cosine transformation
must be the inverse of that performed on the cosines by the LATTIC entry: but while LATTIC maps the vector WB onto a different vector UB, LATNOR maps the UN vector onto itself.
Note that if the transformation implies a rotation, it is necessary to save first the incoming UN cosines to
local variables, to avoid overwriting the vector before all transformation statements are executed.
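For instance, for a purely translational symmetry (identical modules repeated along z, lattice cell n displaced by n times a fixed pitch with respect to the prototype), a minimal sketch could be the following; the calling sequences follow the argument lists given above and the pitch value is purely illustrative. Since a translation does not affect direction cosines, LATTIC copies them unchanged and LATNOR has nothing to do:
      SUBROUTINE LATTIC ( XB, WB, DIST, SB, UB, IR, IRLTGG, IRLT,
     &                    IFLAG )
      INCLUDE '(DBLPRC)'
      INCLUDE '(DIMPAR)'
      INCLUDE '(IOUNIT)'
      DIMENSION XB(3), WB(3), SB(3), UB(3), IRLT(*), UN(3)
*  Illustrative pitch between consecutive lattice cells along z (cm)
      PARAMETER ( PITCH = 100.0D0 )
*  Translate the real position back into the prototype cell
      SB (1) = XB (1)
      SB (2) = XB (2)
      SB (3) = XB (3) - DBLE (IRLTGG) * PITCH
*  A pure translation leaves the direction cosines unchanged
      UB (1) = WB (1)
      UB (2) = WB (2)
      UB (3) = WB (3)
      RETURN
*
      ENTRY LATNOR ( UN, IRLTNO )
*  For a pure translation the normal has the same cosines in both
*  systems: nothing to do
      RETURN
      END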
Notes
1) Different symmetry transformations can of course be implemented in the same LATTIC routine (each being
activated by a different cell number or range of cell numbers).
2) The advantage of the lattice geometry is to avoid describing in detail the geometry of repetitive multi-modular
structures. It must be realised, however, that a penalty is generally paid in computer efficiency.
3) Also, a region contained in the prototype cell and all those “mapped” to it inside lattice cells are treated by
the program as if they were connected with “non-overlapping ORs” (see 8.2.7.2, 8.2.7.4) into a single region.
Therefore, any region-based scoring (options SCORE, USRTRACK, etc.) can only provide quantities averaged
over the whole structure. More detailed information must be obtained by region-independent options such as
USRBIN or by user-written routines (MGDRAW, see 13.2.13). The USRBIN and EVENTBIN options (p. 249, 124), with WHAT(1) = 8, can also be used to request a special binning type which activates the MUSRBR, LUSRBL, FUSRBV user routines to recover lattice information (see 13.2.9).
4) A transformation between a lattice cell and a prototype region can alternatively be defined without resorting
to the LATTIC user routine. In this case, the transformation is defined via a ROT-DEFIni card (p. 221) and
the correspondence is established by giving the transformation index in the SDUM of the LATTICE card (see
8.2.10, p. 297).
Argument list
X, Y, Z : current position (input only)
BTX, BTY, BTZ : direction cosines of the magnetic field vector (returned)
B : magnetic field intensity in tesla (returned)
NREG : current region (input only)
IDISC : if returned = 1, the particle will be discarded
MAGFLD is activated by option MGNFIELD (p. 172) with WHAT(4–6) = 0.0 and is used to return intensity and
direction of a magnetic field based on the current position and region. It is called only if the current region
has been flagged as having a non-zero magnetic field by option ASSIGNMAt (p. 66), with WHAT(5) = 1.0.
The magnetic field spatial distribution is often read and interpolated from an external field map.
Note that in any case the direction cosines must be properly normalised in double precision (e.g.,
BTX = SQRT(ONEONE - BTY**2 - BTZ**2)), even if B = 0.0.
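As an illustrative sketch (a uniform 1 tesla field along +y; in a realistic case B and the cosines would be interpolated from a field map as a function of position and region), MAGFLD can be as simple as:
      SUBROUTINE MAGFLD ( X, Y, Z, BTX, BTY, BTZ, B, NREG, IDISC )
      INCLUDE '(DBLPRC)'
      INCLUDE '(DIMPAR)'
      INCLUDE '(IOUNIT)'
*  Uniform 1 tesla field along +y (illustrative values only)
      IDISC = 0
      B   = 1.0D0
      BTX = ZERZER
      BTY = ONEONE
      BTZ = ZERZER
*  (BTX,BTY,BTZ) are already normalised here; in general they must be
*  normalised in double precision even when B = 0.0
      RETURN
      END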
If name-based input is being used, the name corresponding to NREG can be obtained via a call to routine
GEOR2N:
CALL GEOR2N (NUMREG, NAMREG, IERR)
where NUMREG (input variable) is the region number, and NAMREG (returned variable) is the corresponding
region name (to be declared as CHARACTER*8). IERR is a returned error code: if = 0 the conversion is
successful. See example in the description of BXDRAW below.
Argument list
IFLAG : type of nuclear interaction which has produced secondaries:
1: inelastic
2: elastic
3: low-energy neutron
MDSTCK is called after a nuclear interaction in which at least one secondary particle has been produced, before
any biasing is applied, to decide which secondary will be loaded in the main stack for further transport. The
properties of the secondaries are stored in the secondary stack (COMMON GENSTK). With MDSTCK, users can
analyse those secondaries, write them to a file, or even modify the content of GENSTK (for instance applying
their own biasing). In the latter case, however, it is their responsibility to make sure that energy is conserved,
the various physical quantities are still consistent, etc.
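A possible sketch of an MDSTCK that simply dumps the secondaries of inelastic interactions to a user file is shown below. The single-argument calling sequence and the GENSTK variable names used (NP for the number of secondaries, KPART, TKI and WEI for their ids, kinetic energies and weights) are assumptions that should be checked against the template routine and the (GENSTK) INCLUDE file:
      SUBROUTINE MDSTCK ( IFLAG )
      INCLUDE '(DBLPRC)'
      INCLUDE '(DIMPAR)'
      INCLUDE '(IOUNIT)'
      INCLUDE '(GENSTK)'
*  Dump the secondaries of inelastic interactions to unit 70 (assumed
*  to have been opened elsewhere, e.g. in USRINI); variable names from
*  GENSTK are assumptions, see the text above
      IF ( IFLAG .EQ. 1 ) THEN
         DO 10 IP = 1, NP
            WRITE (70,*) KPART (IP), TKI (IP), WEI (IP)
 10      CONTINUE
      END IF
      RETURN
      END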
Subroutine MGDRAW, activated by option USERDUMP (p. 241) with WHAT(1) ≥ 100.0, usually writes a
“collision tape”, i.e., a file where all or selected transport events are recorded. The default version (unmodified
by the user) offers several possibilities, selected by WHAT(3) in USERDUMP. Details are given in Chap. 11.
Additional flexibility is offered by a user entry USDRAW, interfaced with the most important physical events
happening during particle transport. Of course, the user can also modify any other entry of this subroutine
(BXDRAW called at boundary crossings, EEDRAW called at event end, MGDRAW for trajectory drawing, ENDRAW
for recording of energy depositions and SODRAW for recording of source events): for instance the format of
the output file can be changed, and different combinations of events can be written to file.
No information is written by default at EEDRAW and BXDRAW calls, but the entries are called for any
value of WHAT(3) < 7.0 in USERDUMP (EEDRAW also for WHAT(4) ≥ 1).
But the most interesting aspect of the routine is that the six entries (all of which, if desired, can be activated
at the same time by setting USERDUMP with WHAT(3) = 0.0 and WHAT(4) ≥ 1.0) constitute a complete
interface to the whole Fluka transport. Therefore, MGDRAW can be used not only to write a collision tape,
but to do any kind of complex analysis (for instance studying correlations between events).
Entries:
MGDRAW (trajectory dumping for drawing)
MGDRAW writes by default, for each trajectory, the following variables (contained in COMMON TRACKR):
NTRACK : number of track segments
MTRACK : number of continuous energy deposition events along the track. Local energy deposition events,
i.e., energy deposition at a point, such as that from heavy recoils, particles below threshold and
low energy neutron kerma, are written instead by the entry ENDRAW (see below)
JTRACK : type of particle
ETRACK : total energy of the particle
WTRACK : weight of the particle
NTRACK values of XTRACK, YTRACK, ZTRACK: end of each track segment
MTRACK values of DTRACK: energy deposited at each deposition event
CTRACK : total length of the curved path
Other variables are available in TRACKR (but are not written by MGDRAW unless the latter is modified by the user): particle momentum, direction cosines, cosines of the polarisation vector, age, generation, etc. (see the full list in the comments of the INCLUDE file).
If name-based input is being used, the name corresponding to MREG can be obtained via a call to routine
GEOR2N:
CALL GEOR2N (NUMREG, NAMREG, IERR)
where NUMREG (input variable) is the region number, and NAMREG (returned variable) is the corresponding
region name (to be declared as CHARACTER*8). IERR is a returned error code: if = 0 the conversion is
successful. See example in the description of BXDRAW below.
BXDRAW is called at each boundary crossing (if requested by the user with USERDUMP, WHAT(3) < 7.0). There
is no default output: any output must be supplied by the user.
If name-based input is being used, the names corresponding to MREG and NEWREG can be obtained via a call
to routine GEOR2N:
CALL GEOR2N (NUMREG, NAMREG, IERR)
where NUMREG (input variable) is a region number, and NAMREG (returned variable) is the corresponding region
name (to be declared as CHARACTER*8). IERR is a returned error code: if = 0 the conversion is successful.
Example:
.......................................
CHARACTER*8 MRGNAM, NRGNAM
.......................................
ENTRY BXDRAW ( ICODE, MREG, NEWREG, XSCO, YSCO, ZSCO )
CALL GEOR2N ( MREG, MRGNAM, IERR1 )
CALL GEOR2N ( NEWREG, NRGNAM, IERR2 )
IF(IERR1 .NE. 0 .OR. IERR2 .NE. 0) STOP "Error in name conversion"
.......................................
IF(MRGNAM .EQ. "MyUpsREG" .AND. NRGNAM .EQ. "MyDwnREG") THEN
.......................................
EEDRAW is called at the end of each event, or primary history, (if requested by the user with USERDUMP,
WHAT(3) ≤ 0.0). There is no default output: any output must be supplied by the user.
In those cases the ID of the particle originating the interaction is saved in the TRACKR variable J0TRK (which
otherwise has value zero)
XSCO, YSCO, ZSCO, RULL : see argument list.
If name-based input is being used, the name corresponding to MREG can be obtained via a call to routine
GEOR2N:
where NUMREG (input variable) is the region number, and NAMREG (returned variable) is the corresponding
region name (to be declared as CHARACTER*8). IERR is a returned error code: if = 0 the conversion is
successful. See example in the description of BXDRAW above.
Argument list
No arguments
USDRAW is called after each particle interaction (if requested by the user with option USERDUMP,
WHAT(4) ≥ 1.0). There is no default output: any output must be supplied by the user.
Information about the secondary particles produced is available in COMMON GENSTK, except that concerning
delta rays produced by heavy ions (in which case the properties of the single electron produced are available
in COMMON EMFSTK, with index NP). Another exception is that about heavy evaporation fragments (deuterons,
3
H, 3 He, α, with JTRACK ID equal respectively to -3, -4, -5, -6) and fission/fragmentation products generated
in an inelastic interaction (with JTRACK = -7 to -12), which are all stored in COMMON FHEAVY, with index
NPHEAV. To get the kinetic energy of particles with JTRACK < -6), one must subtract from their total energy
(ETRACK in COMMON TRACKR) their fully stripped nuclear mass (AMNHEA in COMMON FHEAVY).
Information about the interacting particle and its trajectory can be found in COMMON TRACKR (see description
under the MGDRAW entry above). In TRACKR there are also some spare variables at the user’s disposal:
LLOUSE (integer), ISPUSR (integer array) and SPAUSR (double precision array). Like many other TRACKR
variables, each of them has a correspondent in the particle stacks, i.e., the COMMONs from which the particles
are unloaded at the beginning of their transport: FLKSTK, EMFSTK and OPPHST (respectively, the stack of
hadrons/muons, electrons/photons, and optical photons). The correspondence with TRACKR is shown below
under STUPRF/STUPRE (13.2.20). When a particle is generated, its properties (weight, momentum, energy,
coordinates etc., as well as the values of the user flags) are loaded into one of the stacks. The user can write
a STUPRF or STUPRE subroutine (see description below in 13.2.20) to change any of these flags just before they are saved in the stack.
When a particle starts to be transported, its stack variables are copied to the corresponding TRACKR ones.
Unlike the other TRACKR variables, which in general become modified during transport due to energy loss,
scattering etc., the user flags keep their original value copied from stack until they are changed by the user
himself (generally under the USDRAW entry).
One common application is the following: after an interaction which has produced secondaries, let USDRAW
copy some properties of the interacting particle into the TRACKR user variables. When STUPRF is called next
to load the secondaries into stack, by default it copies the TRACKR user variables to the stack ones. In this
way, information about the parent can be still carried by its daughters (and possibly by further descendants).
This technique is sometimes referred to as “latching”.
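A sketch of this latching technique is shown below, written as the USDRAW entry of a user copy of MGDRAW (which already includes (TRACKR)); the entry argument list is assumed to be that of the template mgdraw.f, and only the first elements of the user arrays are used:
      ENTRY USDRAW ( ICODE, MREG, XSCO, YSCO, ZSCO )
*  "Latch" the identity and total energy of the interacting particle
*  into the TRACKR user variables; the default STUPRF/STUPRE will copy
*  them to the stack entries of the secondaries, so that the daughters
*  (and their descendants) keep carrying this information
      ISPUSR (1) = JTRACK
      SPAUSR (1) = ETRACK
      RETURN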
If name-based input is being used, the name corresponding to MREG can be obtained via a call to routine
GEOR2N:
CALL GEOR2N (NUMREG, NAMREG, IERR)
where NUMREG (input variable) is the region number, and NAMREG (returned variable) is the corresponding
region name (to be declared as CHARACTER*8). IERR is a returned error code: if = 0 the conversion is
successful. See example in the description of BXDRAW above.
Subroutine OPHBDX sets the optical properties of a boundary surface. The call is activated by command
OPT–PROP, with SDUM = SPEC–BDX. See Sec. 7.51 and Chap. 12 for more information.
If name-based input is being used, the names corresponding to MREG and NEWREG can be obtained via a call
to routine GEOR2N:
CALL GEOR2N (NUMREG, NAMREG, IERR)
where NUMREG (input variable) is the region number, and NAMREG (returned variable) is the corresponding
region name (to be declared as CHARACTER*8). IERR is a returned error code: if = 0 the conversion is
successful. See example in the description of BXDRAW above.
Function QUEFFC returns a user-defined quantum efficiency for an optical photon of the given wave-
length or frequency.
It is activated with option OPT–PROP with SDUM = SENSITIV, by setting the 0th photon sensitivity param-
eter to a value < -99. See Sec. 7.51 and Chap. 12 for more information.
Function RFLCTV returns a user-defined reflectivity of the current material for an optical photon of the given wavelength or frequency.
It is activated by command OPT–PROP with SDUM = METAL and WHAT(3) < -99. See Sec. 7.51 and Chap. 12
for more information.
Function RFRNDX returns a user-defined refraction index of the current material for an optical photon
of the given wavelength or frequency.
It is activated by command OPT–PROP with SDUM = blank and WHAT(1) < -99. See Sec. 7.51 and Chap. 12
for more information.
Argument list
No arguments
Subroutine SOEVSV is always called after a beam particle is loaded onto stack, but a call to SOEVSV can be
inserted by the user anywhere in a user routine.
SOEVSV copies the whole COMMON FLKSTK to another COMMON, SOUEVT, which can be included in other user
routines. In other words, this routine is used to “take a snapshot” of the particle bank at a particular time for further use (interfacing to independent generators, etc.).
Argument list
NOMORE : if set = 1, no more calls will occur (the run will be terminated
after exhausting the primary particles loaded onto stack in the
present call). The history number limit set with option START
(p. 231) will be overridden
Subroutine SOURCE is probably the most frequently used user routine. It is activated by option SOURCE
(p. 228) and is used to sample primary particle properties from distributions (in space, energy, time, direction
or mixture of particles) too complicated to be described with the BEAM, BEAMPOS and BEAMAXES cards
alone. For each phase-space variable, a value must be loaded onto COMMON FLKSTK (particle bank) before
returning control. These values can be read from a file, generated by some sampling algorithm, or just
assigned.
Reading from a file is needed, for instance, when the particle data are taken from a collision file, written
by Fluka or by another program (see Chap. 11). The user must open the file with a unit number larger than 20 (lower unit numbers are reserved), either with an explicit Fortran OPEN statement in the SOURCE routine itself or by means of an OPEN card in the input file.
Then, a READ statement in SOURCE can be used to get the data to load in stack, for instance:
READ(21,*) IPART, X, Y, Z, COSX, COSY, COSZ, ENERGY, WEIGHT
ILOFLK (NPFLKA) = IPART
XFLK (NPFLKA) = X
YFLK (NPFLKA) = Y
ZFLK (NPFLKA) = Z
TXFLK (NPFLKA) = COSX
. . . etc. . .
(NPFLKA is the current stack index).
Stack variables can also be assigned directly, for instance leaving unmodified the corresponding values input with BEAM (p. 71) or BEAMPOS (p. 76):
PMOFLK (NPFLKA) = PBEAM
(PBEAM is the momentum value input as WHAT(1) in option BEAM). A set of direct assignments, one for each
of several different stack entries, can be useful, for example, to define a series of RAYs through the geometry
(see Chap. 14):
DO 10 I = 1, 20
NPFLKA = NPFLKA + 1
ILOFLK (NPFLKA) = 0 ! (0 is the RAY particle id number)
XFLK (NPFLKA) = 500.D0 + DBLE(I) * 40.D0
YFLK (NPFLKA) = 200.D0
...etc...
10 CONTINUE
To sample from a uniform distribution, the user must use the function FLRNDM(DUMMY), which returns a
double precision pseudo-random number uniformly distributed between 0 (included) and 1 (not included).
Actually, DUMMY can be any variable name. A simple example of sampling from a uniform distribution is
that of a linear source along the Z axis, between Z = 10 and Z = 80:
Z1 = 10.D0
Z2 = 80.D0
ZFLK (NPFLKA) = Z1 + (Z2 - Z1) * FLRNDM(XXX)
One way to sample a value X from a generic distribution f (x) is the following.
First integrate the distribution function, analytically or numerically, and normalise to 1 the obtained cumu-
lative distribution:
F(x) = ∫_xmin^x f(x)dx / ∫_xmin^xmax f(x)dx
Then, sample a uniform pseudo-random number ξ using FLRNDM and get the desired result by finding the
inverse value X = F^(-1)(ξ) (analytically or most often by interpolation).
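A sketch of this inversion by interpolation inside SOURCE is shown below; EBIN, CUMSPE and NBINS are hypothetical user arrays/variables, prepared for instance in the first-call initialisation block, with CUMSPE(1) = 0, CUMSPE(NBINS+1) = 1 and no empty bins:
*  EBIN(1..NBINS+1): energy bin edges; CUMSPE(1..NBINS+1): normalised
*  cumulative spectrum (user-prepared, see the text above)
      XI = FLRNDM (XDUMMY)
*  Find the bin containing XI
      DO 20 K = 1, NBINS
         IF ( XI .LE. CUMSPE (K+1) ) GO TO 30
 20   CONTINUE
      K = NBINS
 30   CONTINUE
*  Linear interpolation inside the selected bin
      ENSAMP = EBIN (K) + ( EBIN (K+1) - EBIN (K) )
     &       * ( XI - CUMSPE (K) ) / ( CUMSPE (K+1) - CUMSPE (K) )
      TKEFLK (NPFLKA) = ENSAMP
The corresponding momentum PMOFLK must then be set consistently with ENSAMP, as shown for the template routine further below.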
The technique for sampling from a generic distribution described above can be extended to modify the
probability of sampling in different parts of the interval (importance sampling). We replace f (x) by a
weighted function g(x) = f (x) h(x), where h(x) is any appropriate function of x we like to choose. We
normalise g(x) in the same way as f (x) before:
G(x) = ∫_xmin^x g(x)dx / ∫_xmin^xmax g(x)dx = ∫_xmin^x f(x)h(x)dx / B
where B is the integral of g(x) over the whole interval,
and we need also the integral of f (x) over the whole interval:
A = ∫_xmin^xmax f(x)dx
All the sampling is done using the biased cumulative normalised function G instead of the original unbiased
F : we sample a uniform pseudo-random number ξ as before, and we get the sampled value X by inverting
G(x):
X = G^(-1)(ξ)
The particle is assigned a weight B / [A h(X)].
A special case of importance sampling is when the biasing function chosen is the inverse of the unbiased
distribution function:
h(x) = 1/f(x)
g(x) = f(x) h(x) = 1
B = ∫_xmin^xmax g(x)dx = ∫_xmin^xmax dx = xmax − xmin
G(x) = (x − xmin) / (xmax − xmin)
In this case we sample a uniform pseudo-random number t using FLRNDM as shown above. The sampled value
X is simply given by:
X = xmin + (xmax − xmin ) t
and the particle is assigned a weight
B / [A h(X)] = f(X) (xmax − xmin) / ∫_xmin^xmax f(x)dx
But since Fluka normalizes all results per unit primary weight, any constant factor is eliminated in the
normalization. Therefore it is sufficient to assign each particle a weight f(X).
Because X is sampled with the same probability over all possible values of x, independently of the value f (X)
of the function, this technique is used to ensure that sampling is done uniformly over the whole interval,
even though f (x) might have very small values somewhere. For instance it may be important to avoid
undersampling in the high-energy tail of a spectrum, steeply falling with energy but more penetrating, such
as that of cosmic rays or synchrotron radiation.
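A corresponding sketch inside SOURCE is the following (EMIN, EMAX and FSPEC are hypothetical names for the sampling limits and for a user function returning the spectrum value; WTFLK is assumed to be the stack weight variable):
*  Sample the kinetic energy uniformly and carry the spectrum value as
*  a weight (EMIN, EMAX and FSPEC are user-defined, see the text above)
      ENSAMP = EMIN + ( EMAX - EMIN ) * FLRNDM (XDUMMY)
      TKEFLK (NPFLKA) = ENSAMP
      WTFLK  (NPFLKA) = FSPEC (ENSAMP)
As explained above, the missing constant factor is irrelevant because Fluka normalises all results per unit primary weight.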
Option SOURCE (p. 228) allows the user to input up to 12 numerical values (WHASOU(1),(2). . . (12)) and one
8-character string (SDUSOU) which can be accessed by the subroutine by including the following line:
INCLUDE '(SOURCM)'
These values can be used as parameters or switches for a multi-source routine capable of handling several cases, or to identify an external file to be read, etc., without having to recompile and relink the routine.
In the SOURCE routine there are a number of mandatory statements (clearly marked as such in the accompanying comments) which must not be removed or modified. The following IF block initialises the total kinetic energy
of the primary particles and sets two flags: the first to skip the IF block in all next calls, and the second to
remind the program, when writing the final output, that a user source has been used:
* +-------------------------------------------------------------------*
* | First call initialisations:
IF ( LFIRST ) THEN
* | *** The following 3 cards are mandatory ***
TKESUM = ZERZER
LFIRST = .FALSE.
LUSSRC = .TRUE.
* | *** User initialisation ***
END IF
* |
* +-------------------------------------------------------------------*
The user can insert into the above IF block any other initialisation needed, for instance the preparation
of a cumulative spectrum array from which to sample the energy of the source particles. Note that user
initialisation can take place also in routines USRINI and USRGLO (activated at input time by input options USRICALL (p. 260) and USRGCALL (p. 259); see 13.2.27 and 13.2.26), and in USREIN (called before unloading from stack the first source particle of an event, i.e., just after the call to SOURCE; see 13.2.24).
At the time SOURCE is called, the particle bank FLKSTK is always empty and the stack pointer NPFLKA has
value 0.
The user can load onto the FLKSTK stack one or more source particles at each call: for each particle loaded
the pointer must be increased by 1. The template version of SOURCE loads only one particle: if several are
loaded the following sequence, until the statement CALL SOEVSV not included, must be repeated once for
each particle, possibly inside a DO loop:
NPFLKA = NPFLKA + 1 ! increases the pointer
The following statements assign a value to each of the FLKSTK stack variables concerning the particle being
loaded.
The above statement is followed by several others that must not be changed or removed. In the template
routine, they are encompassed by the comment lines:
From this point .... / ... to this point: don't change anything
The following statements can be overridden or rewritten by the user, assigning new values or sampling them
from problem-dependent distributions.
Then the most frequently changed lines: both energy and momentum of the particle must be loaded onto the
FLKSTK stack, but the two cannot be defined independently. Appropriate kinematical (relativistic) relations
must be applied to derive one from the other.
In the template routine, the momentum is assumed to be assigned by the BEAM option (its value, PBEAM, is taken from COMMON BEAMCM, which contains all values defined by options BEAM and BEAMPOS):
PMOFLK (NPFLKA) = PBEAM
TKEFLK (NPFLKA) = SQRT ( PBEAM**2 + AM (IJBEAM)**2 ) - AM (IJBEAM)
(where AM is the rest mass, in COMMON PAPROP, and IJBEAM is the particle type, in COMMON BEAMCM).
If instead the energy had been sampled first from some spectrum, with ENSAMP the sampled value, the two statements above would become:
TKEFLK (NPFLKA) = ENSAMP
PMOFLK (NPFLKA) = SQRT(ENSAMP * (ENSAMP + TWOTWO * AM(IJBEAM)))
Next come the direction cosines, set by default equal to those input with BEAMPOS (if BEAMPOS is not given, by default UBEAM = VBEAM = 0.0, WBEAM = 1.0):
TXFLK (NPFLKA) = UBEAM
TYFLK (NPFLKA) = VBEAM
TZFLK (NPFLKA) = WBEAM
Remember to make sure that the cosines are normalised! One could replace the last statement by:
TZFLK (NPFLKA) = SQRT ( ONEONE - TXFLK(NPFLKA)**2 - TYFLK(NPFLKA)**2 )
but appropriate values need to be given in some cases, for instance in synchrotron radiation shielding problems.
Finally the particle coordinates, set again by default equal to those input with BEAMPOS:
XFLK (NPFLKA) = XBEAM ! Assumed here to be the same as
YFLK (NPFLKA) = YBEAM ! defined by option BEAMPOS. XBEAM,
ZFLK (NPFLKA) = ZBEAM ! YBEAM, ZBEAM are also in COMMON BEAMCM
If for example our problem required instead a linear source uniformly distributed along Z between Z1 and
Z2, we could replace the last statement by:
ZFLK (NPFLKA) = Z1 + FLRNDM(UGH) * (Z2 - Z1)
The following lines in the template SOURCE routine should never be changed. They calculate the total energy
of the primary particles, define the remaining properties of the particles (starting region and lattice cell) and
do some geometry initialisation.
The last line calls the SOEVSV user routine (see description in 13.2.18 above) to save the stack for possible
further use.
Important Notes
1) Even though a user-written source is used, it is recommended to issue in input a card BEAM with an energy
larger than the maximum energy that can be sampled by the user source. This value is used to set up tables
such as stopping powers, cross sections etc., and if not provided, crashes can occur.
2) The values of beam characteristics defined by commands BEAM (p. 71) and POLARIZAti (p. 212) are available in
COMMON BEAMCM: the angular divergence (variable DIVBM), beam width (XSPOT and YSPOT), and the polarisation
vector (UBMPOL, VBMPOL, WBMPOL) can help to set up a scheme to sample the corresponding quantities from
user-defined distributions. But sampling from the distributions pre-defined by BEAM and POLARIZAti is not
simply inherited by subroutine SOURCE: it is the responsibility of the user to write such a scheme!
For this task, it may be useful to define a “beam reference frame” by means of option BEAMAXES (see more
details on p. 74).
13.2.20 STUPRE, STUPRF: SeT User PRoperties for Emf and Fluka particles
These two subroutines are used to assign a value to one or more stack user variables when the corresponding particle is loaded onto one of the stacks (FLKSTK for hadrons/muons, and EMFSTK for electrons/photons).
In each of these stacks the user has access to one integer variable, one integer array and one double precision array.
Each of them is copied to a corresponding variable or array in COMMON TRACKR at the beginning of transport.
The user can access and modify the TRACKR variables via subroutine MGDRAW and its entries ENDRAW, SODRAW and
especially USDRAW (see description above). STUPRF and STUPRE can be used to do the reverse, namely to copy TRACKR
user variables to those of the relevant stack (see USDRAW above).
Note that a stack OPPHST exists also for optical photons, containing similar user variables and arrays LOUOPP,
ISPORK and SPAROK. They can be used in user routines, but they are not handled by STUPRE or STUPRF.
STUPRE is called before loading into stack electrons, positrons and photons.
Argument list
No arguments
The default version does nothing (the user variables of the parent particle are already set equal to those of the original projectile by the various electromagnetic interaction routines; the region, position, etc. are also already set inside the stack arrays).
STUPRF is called before loading into stack hadrons, muons, neutrinos and low-energy neutrons.
Argument list
IJ : type of the parent particle
MREG : current region
XX, YY, ZZ : particle position
NPSECN : index in COMMON GENSTK of the secondary being loaded onto stack
NPPRMR : if > 0, the secondary being loaded is actually still the interacting
particle (it can happen in some biasing situations)
The default version copies to stack the user flags of the parent.
If name-based input is being used, the name corresponding to MREG can be obtained via a call to routine GEOR2N:
CALL GEOR2N (NUMREG, NAMREG, IERR)
where NUMREG (input variable) is the region number, and NAMREG (returned variable) is the corresponding region name
(to be declared as CHARACTER*8). IERR is a returned error code: if = 0 the conversion is successful. See example in
the description of BXDRAW above.
Argument list
IR : region number
RRHADR : multiplicity biasing factor to be applied to the secondaries from
hadronic interactions in region IR (WHAT(2) of card BIASING, p. 80)
HMPHAD : Importance of region IR for hadrons and muons (WHAT(3) of card BI-
ASING, with WHAT(1) = 0.0 or 1.0). Actually the routine argument
is an integer, IMPHAD, equal to importance multiplied by 10000, but the
user should consider only the double precision version HMPHAD (a con-
version from and to the integer version is provided at the beginning
and at the end of the routine, and should not be changed)
HMPLOW : Importance of region IR for low-energy neutrons (WHAT(3) of card BI-
ASING, with WHAT(1) = 0.0 or 3.0). Actually the routine argument
is an integer, IMPLOW, equal to importance multiplied by 10000, but the
user should consider only the double precision version HMPLOW (a con-
version from and to the integer version is provided at the beginning
and at the end of the routine, and should not be changed)
HMPEMF : Importance of region IR for electrons and photons (WHAT(3) of card
BIASING, with WHAT(1) = 0.0 or 2.0). Actually the routine argu-
ment is an integer, IMPEMF, equal to importance multiplied by 10000,
but the user should consider only the double precision version HMPEMF
(a conversion from and to the integer version is provided at the begin-
ning and at the end of the routine, and should not be changed)
IGCUTO : Cutoff group index for low-energy neutrons in region IR (WHAT(1) in
card LOW–BIAS, p. 154)
IGNONA : Non-analogue absorption group limit for low-energy neutrons in region
IR (WHAT(2) in card LOW–BIAS)
PNONAN : Non-analogue survival probability for low-energy neutrons in region IR
(WHAT(3) in card LOW–BIAS)
IGDWSC : Group limit for biased downscattering for low-energy neutrons in re-
gion IR (WHAT(1) in card LOW–DOWN, p. 156)
FDOWSC : Biased downscattering factor for low-energy neutrons in region IR
(WHAT(2) in card LOW–DOWN)
JWSHPP : Weight Window/importance profile index for low-energy neutrons in
region IR (SDUM in WW–FACTOr, p. 270)
WWLOW : Weight Window lower level in region IR (WHAT(1) in card
WW–FACTOr, possibly modified by WHAT(4) in WW–THRESh, p. 275
or WHAT(2) in WW–PROFIle, p. 273)
WWHIG : Weight-Window upper level in region IR (WHAT(2) in card
WW–FACTOr, possibly modified by WHAT(4) in WW–THRESh or
WHAT(2) in WW–PROFIle)
WWMUL : Weight-Window multiplicative factor applied to the two energy thresh-
olds defined with WW–THRESh, for region IR (WHAT(3) in card
WW–FACTOr)
EXPTR : Exponential transform parameter for region IR (WHAT(2) in card EX-
PTRANS, p. 130) (not implemented yet!!!!!!!! )
ELECUT : e+ , e− cutoff in region IR (WHAT(1) in card EMFCUT)
GAMCUT : Photon cutoff in region IR (WHAT(2) in card EMFCUT, p. 113)
LPEMF : Leading Particle Biasing flag in region IR (SDUM = LPBEMF in card
EMF–BIAS, p. 108, or WHAT(3) in card EMFCUT)
ELPEMF : Maximum e+ /e− energy for applying Leading Particle Biasing
(WHAT(2) in card EMF–BIAS with SDUM = LPBEMF)
PLPEMF : Maximum photon energy for applying leading particle biasing
(WHAT(3) in card EMF–BIAS with SDUM = LPBEMF)
Subroutine UBSSET does not require a special command to be activated: it is always called several times for each region (once for every biasing option or suboption), after the end of input reading and before the calculations start. The
default version is a dummy and does nothing. The user can replace it to override any biasing parameters specified in
input.
The UBSSET subroutine is used especially in cases with a large number of regions, because it allows the user to derive the biasing parameters from simple algorithms instead of entering each input value by hand. Choosing an appropriate
numbering scheme for the geometry regions can often facilitate the task.
For instance, assuming a simple slab geometry with an expected exponential hadron attenuation from region 3 to
region 20, each region being one half-value-layer thick, one could write the following in order to set importances that
would keep the hadron number about constant in all regions:
IF(IR .GE. 3 .AND. IR .LE. 20) HMPHAD = ONEONE * TWOTWO**(IR-3)
If name-based input is being used, the name corresponding to IR can be obtained via a call to routine GEOR2N:
CALL GEOR2N (NUMREG, NAMREG, IERR)
where NUMREG (input variable) is the region number, and NAMREG (returned variable) is the corresponding region name
(to be declared as CHARACTER*8). IERR is a returned error code: if = 0 the conversion is successful. See example in
the description of BXDRAW above.
13.2.22 UDCDRL: User defined DeCay DiRection biasing and Lambda (for ν only)
Argument list
input:
IJ : type of decaying particle
KPB : the outgoing neutrino for which direction biasing is requested
NDCY : number of decay products
output:
UDCDRB, VDCDRB, WDCDRB : cosines of the preferential outgoing direction for the
neutrino
UDCDRL : width λ of the distribution around the preferential direction (the function's return value)
Function UDCDRL is used to bias, event by event, the direction of a neutrino emitted by a decaying particle of type IJ. The preferential direction axis is returned in the UDCDRB, VDCDRB, WDCDRB variables, and the value returned by UDCDRL is the λ for direction biasing: the zenith angle around the selected axis is sampled according to exp[(1 − cos θ)/λ].
Argument list
input:
MREG : region at the beginning of the step
NEWREG : region at the end of the step
output:
FIMP : returns the user-defined importance ratio between the position at the
end and at the beginning of the step
Subroutine USIMBS is activated by card BIASING (p. 80) with SDUM = USER. The routine is called at every particle
step. It can be used to implement any importance biasing scheme based on region number and on phase space
coordinates and other information provided by COMMON TRACKR.
Warning: The user must balance the very effective biasing power offered by the routine with the important demand
on CPU time due to the large number of calls.
If name-based input is being used, the names corresponding to MREG and NEWREG can be obtained via a call to routine
GEOR2N:
CALL GEOR2N (NUMREG, NAMREG, IERR)
where NUMREG (input variable) is the region number, and NAMREG (returned variable) is the corresponding region name
(to be declared as CHARACTER*8). IERR is a returned error code: if = 0 the conversion is successful. See example in
the description of BXDRAW above.
Argument list
No arguments
Subroutine USREIN is called just before the first source particle of an event is unloaded from stack and begins to be
transported. An event is the full history of a group of related particles and their descendants. If primaries are loaded
into stack by the input option BEAM (p. 71), there is only one source particle per event; but there can be more if the
user routine SOURCE is used to load particles into stack. USREIN does not need any special command to be activated,
but the default version of USREIN does nothing: the user can write here any kind of initialisation.
13.2.25 USREOU: USeR Event OUtput (called at the end of each event)
Argument list
No arguments
Subroutine USREOU is called at the end of each event, namely after all event primary particles and their descendants
have been transported. (See USREIN above for a definition of an event).
USREOU does not need any special command to be activated, but the default version of USREOU does nothing: the user
can write here any kind of event analysis, output, etc.
Argument list
WHAT(1),(2),(3),(4),(5),(6) : user-provided numerical parameters
SDUM : user-provided character string (8 characters)
Subroutine USRGLO is called before any other initialisation is done by the program, provided a command USRGCALL
is present anywhere in the input. It can be used to do any kind of initialisation: reading and manipulating data from
one or more files, calling other private routines, etc.
The calling parameters can carry any kind of useful information or can be used as flags to choose between different
possible actions to be performed before any particle transport takes place.
Argument list
WHAT(1),(2),(3),(4),(5),(6) : user-provided numerical parameters
SDUM : user-provided character string (8 characters)
Subroutine USRINI is called every time a USRICALL (p. 260) card is read in input. It can be used to do any kind of
initialisation: reading and manipulating data from one or more files, calling other private routines, etc.
The calling parameters can carry any kind of useful information or can be used as flags to choose between different
possible actions to be performed before any particle transport takes place.
Argument list
IJ : particle type
EKSCO : particle kinetic energy (GeV)
PLA : particle momentum (GeV/c)
WEE : particle weight
MREG : previous region number
NEWREG : current region number
XX, YY, ZZ : particle position
TXX, TYY, TZZ : particle direction
Subroutine USRMED is activated by option MAT–PROP (p. 165) with SDUM = USERDIRE, for one or more materials
indicated by the user. It is called every time a particle is going to be transported in one of the user-flagged materials.
1) MREG = NEWREG: the particle is going to move from a point inside the medium. The user is allowed to change only
the particle weight. Typical application: simulating attenuation of optical photons in an absorbing medium by
reducing the photon weight.
2) MREG ≠ NEWREG: the particle is going to move from a point on a boundary between two different regions. The
user may change any of the following: particle weight, current region number, direction cosines.
Typical applications:
– simulating refraction, by changing the direction cosines so that the particle is still inside the same region. To do
this, one generally needs the direction cosines of the normal to the surface: TXNOR(NPFLKA), TYNOR(NPFLKA),
TZNOR(NPFLKA) (COMMON FLKSTK must be included).
– simulating reflection (albedo) at a boundary. The direction cosines must be modified according to some
reflection law or albedo angular distribution, and NEWREG must be set = MREG.
In both cases the weight can also be reduced to account for surface reflectivity or similar (if the particle is an optical
photon, the FRGHNS user function (13.2.8) can be called to establish a surface roughness.) Also, setting the weight
WEE to zero is a way to kill the particle.
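A hedged sketch of the second case, specular reflection of an optical photon at a boundary with a 90% reflectivity (the argument names are those of the list above, the reflectivity value is arbitrary, and the boundary normal is taken from FLKSTK as mentioned in the text):
      SUBROUTINE USRMED ( IJ, EKSCO, PLA, WEE, MREG, NEWREG,
     &                    XX, YY, ZZ, TXX, TYY, TZZ )
      INCLUDE '(DBLPRC)'
      INCLUDE '(DIMPAR)'
      INCLUDE '(IOUNIT)'
      INCLUDE '(FLKSTK)'
      IF ( MREG .NE. NEWREG ) THEN
*  Specular reflection: t' = t - 2 (t.n) n, with n the boundary normal
         TDOTN = TXX * TXNOR (NPFLKA) + TYY * TYNOR (NPFLKA)
     &         + TZZ * TZNOR (NPFLKA)
         TXX = TXX - TWOTWO * TDOTN * TXNOR (NPFLKA)
         TYY = TYY - TWOTWO * TDOTN * TYNOR (NPFLKA)
         TZZ = TZZ - TWOTWO * TDOTN * TZNOR (NPFLKA)
*  Keep the particle in the same region and apply the reflectivity
         NEWREG = MREG
         WEE    = 0.9D0 * WEE
      END IF
      RETURN
      END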
If name-based input is being used, the names corresponding to MREG and NEWREG can be obtained via a call to routine
GEOR2N:
CALL GEOR2N (NUMREG, NAMREG, IERR)
(see the example in the description of BXDRAW above).
Argument list
WHAT(1),(2),(3),(4),(5),(6) : user-given numerical parameters
SDUM : user-given character string (8 characters)
Subroutine USROUT is called every time a USROCALL (p. 261) card is read in input. It is used to print special user-
written output in addition to the standard one provided by default.
The calling parameters can carry any kind of useful information or can be used as flags to choose between different
possible actions to be performed after all particle transport has taken place.
Argument list
IZ : Atomic number of the residual nucleus
IA : Mass number of the residual nucleus
IS : Isomeric state of the residual nucleus
X, Y, Z : particle position
MREG : current region
WEE : particle weight
ICALL : internal code calling flag (not for general use)
Subroutine USRRNC is called every time a residual nucleus is produced, if option USERWEIG has been requested with
WHAT(5) > 0.
If name-based input is being used, the name corresponding to MREG can be obtained via a call to routine GEOR2N:
CALL GEOR2N (NUMREG, NAMREG, IERR)
where NUMREG (input variable) is the region number, and NAMREG (returned variable) is the corresponding region name
(to be declared as CHARACTER*8). IERR is a returned error code: if = 0 the conversion is successful. See example in
the description of BXDRAW above.
Chapter 14
RAY pseudo-particles
A RAY travels in a straight line at the speed of light without any physical interaction. At the starting point,
and at each boundary crossing, Fluka writes information on a binary (unformatted) file. The file logical unit number
is parameterised as LUNRAY in INCLUDE file (IOUNIT) (usually LUNRAY = 10).
An example of user program which can be used to retrieve the tracking information from the binary file is
given below. The meaning of the variables read is explained at the end:
* .................................................................
* user program or subroutine
* .................................................................
PARAMETER (LUNRAY = 10)
CHARACTER MATNAM*8, FILNAM*80
INTEGER NRAYRN, MREG, MLATTC, MMAT, MREGLD, MLATLD, MMATLD, IDISC
REAL EKIN, XX, YY, ZZ, R2, R3, THETAP, PHIPOS, TXX, TYY, TZZ,
& THETAD, PHIDIR, ETADIR, RCM, ALAMDI, ALAMDP, ALAMDN, ALAMDG,
& ALAMDR, DEKMIP, GMOCM2, DELAGE, RCMTOT, ALITOT, ALPTOT,
& ALNTOT, ALGTOT, ALRTOT, TOTMIP, SRHTOT, AGEAGE
* .................................................................
* here other possible declarations
* .................................................................
WRITE (*,*) ' File name?'
READ (*,'(A80)') FILNAM
OPEN (FILE = FILNAM, UNIT = LUNRAY, STATUS = 'OLD', FORM =
& 'UNFORMATTED')
* loop over several rays
1 CONTINUE
* read info about ray starting point
READ (LUNRAY, END = 3000, ERR=1000) NRAYRN, MREG, MLATTC,
& MMAT, EKIN
READ (LUNRAY, END = 1000) XX, YY, ZZ, R2, R3, THETAP, PHIPOS
READ (LUNRAY, END = 1000) TXX, TYY, TZZ, THETAD, PHIDIR, ETADIR
* ................................................................
* here possible user code to manipulate values read
* ................................................................
* loop over further positions along the ray path
2 CONTINUE
* read info about next point
READ (LUNRAY, END = 2000) MREGLD, MLATLD, MMATLD,
& MATNAM, IDISC
READ (LUNRAY, END = 2000) XX, YY, ZZ, R2, R3, THETAP, PHIPOS
READ (LUNRAY, END = 2000) RCM, ALAMDI, ALAMDP, ALAMDN, ALAMDG,
& ALAMDR, DEKMIP, GMOCM2, DELAGE
READ (LUNRAY, END = 2000) RCMTOT, ALITOT, ALPTOT, ALNTOT,
& ALGTOT, ALRTOT, TOTMIP, SRHTOT,
& AGEAGE
* .............................................................
* possible user code to manipulate values read
* .............................................................
IF ( IDISC .EQ. 0 ) THEN
* ...........................................................
* possible user code at the end of ray step
* ...........................................................
GO TO 2
END IF
* ...............................................................
* possible user code at the end of ray trajectory
* ...............................................................
* new ray
GO TO 1
1000 CONTINUE
WRITE(*,*) ' Incomplete data on file about ray starting point'
GO TO 3000
2000 CONTINUE
WRITE(*,*) ' Incomplete data on file about ray trajectory'
3000 CONTINUE
* .................................................................
* possible user code at the end of analysis
* .................................................................
CLOSE (UNIT = LUNRAY)
END
ALGTOT = cumulative distance traversed so far, in units of maximum photon mean free paths (i.e., at the so-called
Compton minimum). Note: if the EMF option has not been requested, explicitly or implicitly by default,
ALGTOT always has zero value
ALRTOT = cumulative distance traversed so far, in units of radiation lengths. Note: if the EMF option has not been
requested, explicitly or implicitly by default, ALRTOT is calculated but only in an approximate way
TOTMIP = cumulative energy lost so far by a minimum ionising muon
SRHTOT = cumulative distance traversed so far in g/cm2
AGEAGE = cumulative time elapsed so far in seconds
Chapter 15
Special source: colliding beams
One of the two colliding beams (from now on the “first beam”) is a beam of hadrons (including protons or
heavier nuclei), the other one (the “second beam”) is a beam of protons or heavier nuclei but not of other hadrons.
The different SDUM values allow the user to define the two beams in different ways. The first and the second continuation
cards are identical for all SDUM options.
First card (SDUM = PPSOURCE):
WHAT(1) = lab momentum x-component for hadrons or nuclei of the first beam (GeV/c)
WHAT(2) = lab momentum y-component for hadrons or nuclei of the first beam (GeV/c)
WHAT(3) = lab momentum z-component for hadrons or nuclei of the first beam (GeV/c)
WHAT(4) = lab momentum x-component for protons or nuclei of the second beam (GeV/c)
WHAT(5) = lab momentum y-component for protons or nuclei of the second beam (GeV/c)
WHAT(6) = lab momentum z-component for protons or nuclei of the second beam (GeV/c)
First continuation card:
WHAT(1) = σx (cm) for the Gaussian sampling of the interaction position around XBEAM (the x position defined
in the BEAMPOSit card)
WHAT(2) = σy (cm) for the Gaussian sampling of the interaction position around YBEAM (the y position defined
in the BEAMPOSit card)
WHAT(3) = σz (cm) for the Gaussian sampling of the interaction position around ZBEAM (the z position defined
in the BEAMPOSit card)
WHAT(4) = sampling limit, in σ, applying along x, y, and z
≤ 0.0: ignored (no limit)
WHAT(5) = particle id number (or corresponding name) of hadrons of the first beam
Default :as defined by the BEAM card (and by the HI–PROPErt card in case it is a heavy ion)
SDUM = “ & ” in any position in column 71 to 78 (or in the last field if free format is used)
Default : as defined by the BEAM card (and by the HI–PROPErt card in case it is a heavy ion)
Second continuation card (only needed if the second is a beam of heavy ions):
WHAT(2) : specifies the divergence of the first beam in the crossing plane (in rad):
σthC for the Gaussian sampling of the angle between the direction of the particle and the most
probable direction as defined by WHAT(1–3) of the first card
WHAT(3) : specifies the divergence of the first beam in the orthogonal plane (in rad):
σthO for the Gaussian sampling of the angle between the direction of the particle and the most
probable direction as defined by WHAT(1–3) of the first card
WHAT(4) : specifies the divergence of the second beam in the crossing plane (in rad):
σthC for the Gaussian sampling of the angle between the direction of the particle and the most
probable direction as defined by WHAT(4–6) of the first card
WHAT(5) : specifies the divergence of the second beam in the orthogonal plane (in rad):
σthO for the Gaussian sampling of the angle between the direction of the particle and the most
probable direction as defined by WHAT(4–6) of the first card
WHAT(6) : not used
SDUM = “ && ” in any position in column 71 to 78 (or in the last field if free format is used)
For SDUM = CROSSASY:
First card:
WHAT(2) = polar angle (rad) between the momentum of hadrons or nuclei of the first beam and the z (positive)
direction (must be between 0 and π/2)
WHAT(3) = azimuthal angle (deg) defining the crossing plane (see Note 2) below)
WHAT(5) = polar angle (rad) between the momentum of protons or nuclei of the second beam and the -z
(negative) direction (must be between 0 and π/2)
WHAT(6) : not used
First and second continuation cards: as for SDUM = PPSOURCE (see above)
For SDUM = CROSSSYM:
First card:
WHAT(1) = lab momentum of protons or nuclei of both beams (GeV/c) (see Note 3))
WHAT(3) = azimuthal angle (deg) defining the crossing plane (see Note 2))
First and second continuation cards: as for SDUM = PPSOURCE (see above)
Notes
1) When SPECSOUR is used to define two colliding beams (SDUM = PPSOURCE, CROSSASY or CROSSSYM),
Dpmjet must be linked through the script $FLUPRO/flutil/ldpmqmd. A card PHYSICS with SDUM = LIMITS
sets the maximum CMS momentum (GeV/c) in WHAT(1) for Dpmjet initialization purposes.
2) With SDUM = CROSSASY and CROSSSYM, the half plane containing the two proton momenta at the sampled
interaction point (XXX, YYY, ZZZ) is assumed to be limited by an axis z’ passing through (XXX, YYY)
and parallel to the z-axis. Polar and azimuthal angles are defined with respect to z’, with WHAT(3) = 0
corresponding to the xz’ half plane towards positive x, WHAT(3) = 90 corresponding to the yz’ half plane
towards positive y, WHAT(3) = 180 corresponding to the xz’ half plane towards negative x, WHAT(3) = 270
corresponding to the yz’ half plane towards negative y.
3) With SDUM = CROSSSYM, the particles of the two beams are identical and have the same momentum. They
can only be protons or nuclei: other hadrons are excluded.
4) The particles produced in the interaction of two beams have generation number 1 (i.e., they are considered as
primary particles).
5) All the scores in a run where the source consists of two colliding beams are normalised per beam interaction.
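As a purely illustrative sketch (all numerical values below are arbitrary and not taken from this manual): a head-on
collision of two 3500 GeV/c proton beams along the z axis, with 0.003 cm (30 µm) transverse spot sizes, no spread
along z, and a 3 σ sampling limit, leaving the particle ids of both beams at their defaults from the BEAM card. BEAM
and BEAMPOSit cards (the latter defining XBEAM, YBEAM and ZBEAM) are assumed to be present as well, the
requirements of Note 1) above (Dpmjet linking, PHYSICS card with SDUM = LIMITS) apply, and the fixed-format
field alignment shown is only indicative:
* Sketch of a possible colliding-beam source definition
SPECSOUR         0.0       0.0    3500.0       0.0       0.0   -3500.0PPSOURCE
SPECSOUR       0.003     0.003       0.0       3.0                    &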
Chapter 16
Special source: cosmic rays
Command SPECSOUR does not only define cosmic ray sources: it also defines a number of other pre-defined complex
sources that cannot be described by the simple keywords BEAM, BEAMPOSit, BEAMAXES and HI–PROPErt. To
avoid confusion, the SPECSOUR input related to cosmic rays is described separately in this Chapter, in Sec. 16.3.
A number of tools and packages have been developed for the Fluka environment to simulate the production
of secondary particles by primary cosmic rays interacting with the Earth’s atmosphere. These tools, in different
stand-alone versions, have already been successfully used for fundamental physics research [22, 24].
The set of Fluka tools for cosmic ray simulation includes a set of core routines to manage event generation,
geomagnetic effects and particle scoring, and the following stand-alone data files and programs:
– file atmomat.cards: it contains the material definitions for the density profile of the U.S. Standard Atmosphere.
These cards must be inserted (or the file included with the #include directive) into the Fluka input file.
– file atmogeo.cards: it contains an example of a 3D geometrical description of the Earth’s atmosphere, generated
in accordance with the previous data cards (and corresponding density profile). This geometry includes the
whole Earth.
– program atmloc 2011.f: it prepares the description of the local atmosphere geometry with the atmospheric
shells initialised by option GCR–SPE. This geometry includes only a slice of the Earth geometry, centered
around the geomagnetic latitude input by the user.
– files <iz>phi<MV>.spc: GCR All-Particle Spectra for the iz-th ion species (iz = 1, ..., 28), modulated for
the solar activity corresponding to a φ parameter of <MV> megavolts. φ = 500 MV roughly corresponds to solar
minimum, while φ = 1400 MV roughly corresponds to solar maximum.
– file allnucok.dat: GCR All-Nucleon Spectra
– file sep20jan2005.spc: spectra for the Solar Particle Event of Jan 20th, 2005
– file sep28oct2003.spc: spectra for the Solar Particle Event of Oct 28th, 2003
The ion composition of the galactic flux is derived from a code [17] which considers all elemental groups from Z = 1
to Z = 28. The spectrum is modified to follow recent data sets (AMS [9, 10] and BESS [184] data of 1998) up to
100 GeV according to the so-called ICRC2001 fit [94]. The spectrum components are written into 28 files, whose
names have the form <iz>phi<MV>.spc (see above). The first two characters of each file name are the atomic number
of a different primary spectrum ion (e.g., 01: protons, 02: alpha particles, ...). They are followed by the solar
modulation parameter used for generating the spectrum (7 characters) and by the extension .spc.
The .spc files are spectra without geomagnetic cutoff. The .spc files are used together with an analytical
calculation of the rigidity cutoff, according to a centered dipole approximation of the Earth geomagnetic field, adapted
to result in the vertical cutoff inserted into the input file (SPECSOUR command, SDUM = GCR–IONF, WHAT(2) of
the continuation card), at the geomagnetic latitude and longitude of interest.
The All-Nucleon Spectrum is obtained by modifying the fit of the All-Nucleon flux proposed by the Bartol group [7],
using the All-Particle Spectrum (16.1.1) up to 100 GeV and data published in ICRC 2003. Fluxes are read from a file
named allnucok.dat giving the total energy (GeV), the fluxes (E × dN/dE) and the neutron/proton ratios. This
option (“All Nucleon Flux”) is chosen with command SPECSOUR and SDUM = GCR–ALLF (see details in 16.3). The
user can decide whether to sample neutrons and protons from the file and to transport them using the superposition
model, or to consider all neutrons as being bound in alpha particles and to transport protons and alphas. This latter
choice has the advantage of taking the magnetic field, which has no effect on the neutrons, better into account.
For the proton component at energies larger than 100 GeV, using the normalization obtained at 100 GeV, a
spectral index γ = −2.71 is assumed. A spectral index γ = −3.11 is assumed above the knee at 3000 TeV. For what
concerns the He component, γ = −2.59 is used above 100 GeV and a charge-dependent knee is assumed according
to the rule: Enucleon = Z × 3000 TeV/A. Higher Z components have been grouped in CNO, MgSi and Fe sets and
treated using an All-Particle Spectrum with the above mentioned charge-dependent knee parameterisation.
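As a simple numerical illustration of this rule (arithmetic only, with values chosen here merely as an example): for
helium (Z = 2, A = 4) the knee per nucleon falls at 2 × 3000/4 = 1500 TeV, while for iron (Z = 26, A = 56) it falls
at about 26 × 3000/56 ≈ 1400 TeV.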
The deviation from the power law, observed below 10 GeV, is a consequence of the influence of the solar wind called
solar modulation [96]. Flux intensity in this energy range is anti-correlated to the solar activity and follows the
sun-spot 11-year cycle. The correlation between the solar activity and the modulation of the cosmic rays flux has been
studied by monitoring the flux of atmospheric neutrons. In fact, a flux of low energy neutrons (E ≈ 10^8–10^9 eV)
is produced in the interaction of primary CRs with the atmosphere, and it is mostly due to low energy primaries
(1–20 GeV), because of the rapid fall of the primary flux intensity with energy. One assumes that far from the solar
system there exists an unmodified flux called Local Interstellar Spectrum, which is modified within the solar system
by the interaction with the solar wind. This interaction is well described by the Fokker-Planck diffusion equation.
Describing the solar wind by a set of magnetic irregularities, and considering these irregularities as perfect elastic
scattering centres, one obtains the Fokker-Planck diffusion equation. For energies above 100 MeV this equation can
be solved using the “Force Field Approximation” [42]. According to this approximation, at a given distance from the
Sun, for example at 1 a.u., the population of CRs at energy E_interstellar is shifted to the energy E_0 by an energy loss
mechanism due to a potential V:
E_0 = E_interstellar + Z × V_solarwind(t)    (16.1)
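For illustration (simple arithmetic on Eq. (16.1), with values chosen here only as an example): for protons (Z = 1)
and a solar wind potential of 500 MV (roughly solar minimum, see above) the energy shift Z × V amounts to 0.5 GeV,
while for a fully ionised iron nucleus (Z = 26) the same potential gives a shift of 13 GeV.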
The solar wind potential at a given distance from the Sun depends on only one parameter, the time: V = V(t). So
it doesn’t matter what the interstellar flux is: given a flux on the Earth at a time t, one can find the flux at another
time just from the relative variation of the solar wind potential φ. This variation can be derived from the neutron
monitor counts [17]. In the case of the fit used by Fluka an offline code [17] makes use of an algorithm which takes
into account a specific φ value, or the counting rate of the CLIMAX neutron monitor [54] to provide the prediction
for the flux at a specific date or for a given value of the potential which expresses the effect of the interplanetary
modulation of the local interstellar spectrum. Even if the model is not a description of the processes and of the
manner in which they occur, it reasonably predicts the GCR modulation at Earth.
The geometrical setup is a 3-dimensional spherical representation of the whole Earth and of the surrounding atmo-
sphere. This is described by a medium composed of a proper mixture of N, O and Ar, arranged in 100 concentric
shells, extending up to about 70 km of altitude and possibly cut by a truncated cone centred on the geographical
location.
The Fluka package makes use of a density vs. height profile of the atmosphere. An external program containing
a functional fit to this profile has been used to generate at the same time an input geometry file, together with the
data cards for material description (each atmospheric layer, having its proper density, needs to be assigned a different
Fluka material). The geometry produced, and distributed with the name atmogeo.cards, is a spherical representation
of the whole Earth atmosphere. The material definitions and assignments contained in the file atmomat.cards
correspond to the density profile of the U.S. Standard atmosphere. The cards contained in atmomat.cards shall be
included by the user in her/his input file. In addition, the user can specialize this geometry to a given geomagnetic
latitude and longitude with the help of the atmloc 2011.f auxiliary program. In this way, the geometry will contain
only a slice of the atmosphere, centered on the given position. The local geometry file produced by atmloc 2011.f
is named atmloc.geo. The user shall rename this geometry file for further use. More auxiliary files are produced
by atmloc 2011.f: the file atmlocmat.cards contains additional material assignments to be included in the input
together with the ones from atmogeo.cards; the file atmloc.sur contains data used by Fluka at runtime, and
normalization areas.
The geometry (see Fig. 16.1) is built using two truncated cones (TRC) whose vertex is in the centre of the Earth, the
base is out of the atmosphere and the altitude (considering a geographical location in the northern hemisphere) is
in the direction of the Earth radius which passes through the North Pole. The angular span between the two cones
contains the atmosphere of interest for the latitude of interest. In addition there is a third cone placed in the opposite
direction: its vertex is where the other two cones have the base, its base is out of the atmosphere and its height is
in the direction of the Earth radius which passes through the South Pole. A similar geometry can be built for a
requested latitude in the southern hemisphere.
So the complete geometry of the local model, built with the auxiliary program atmloc 2011.f, is made of:
– a main series of layers made from the part of the atmospheric shells between the two cones (this is the part
where the scoring takes place),
– two series of side layers made from the part of the atmospheric shells between one of the two cones and the
third one. These additional layers are needed to take into account the primary and secondary particles which
don’t come from the vertical direction but can anyway reach the region of interest.
The atmosphere can be roughly characterized as the region from sea level to about 1000 km altitude around the globe,
where neutral gases can be detected. Below 50 km the atmosphere can be assumed to be homogeneously mixed and
can be treated as a perfect gas. Above 80 km the hydrostatic equilibrium gradually breaks down as diffusion and
vertical transport become important.
Table 16.1 shows the U.S. Standard Atmosphere depth vs. altitude and vs. Fluka atmospheric layer.
In the last 50 years measurements of the geomagnetic field configuration have been performed regularly with increasing
precision, revealing a yearly weakening of the field intensity of 0.07% and a westward drift of ≈ 0.2 degrees per year
over the Earth’s surface.
This field can be described, to first order, as a magnetic dipole tilted with respect to the rotation axis by
≈ 11.5°, displaced by ≈ 400 km with respect to the Earth’s center and with a magnetic moment M = 8.1 × 10^25 G cm^3.
The dipole orientation is such that the magnetic South pole is located near the geographic North pole, in
Greenland, at a latitude of 75° N and a longitude of 291°. The magnetic North pole is instead near the geographic
South pole, on the border of Antarctica. The intensity at the Earth’s surface varies from a maximum of ≈ 0.6 G
near the magnetic poles to a minimum of ≈ 0.2 G in the region of the South Atlantic Anomaly (SAA), between Brazil
and South Africa. The complex behavior of the equipotential field lines is mainly a consequence of the offset and tilt.
In Fluka the geomagnetic field is taken into account in two different stages of the simulation chain.
1) Effect of geomagnetic cutoff which modulates the primary spectrum: at a given location (point of first inter-
action of primary particles) and for a given direction a threshold in magnetic rigidity exists. The closer the
injection point is to the geomagnetic equator, the higher will be the vertical rigidity threshold. The standard
possibility offered to the user is to evaluate the geomagnetic cutoff making use of a dipolar field centered with
respect to the centre of the Earth, adapted to give the “correct” vertical rigidity cutoff for the geographic
location under examination.
Under this approximation, an analytical calculation of the cutoff can be performed and the Fluka source rou-
tine for galactic cosmic rays can apply the resulting geomagnetic cutoff. In case an off-set dipole (not provided
at present) or more sophisticated approaches are deemed necessary, a spherical harmonic expansion model like
the IGRF model is available [54]. However, no default means is provided for making use of these higher order
approximations for computing geomagnetic cutoffs, since no analytical calculation is possible, and a numerical
(back)tracking of the primary particle from(/to) infinity is required.
Please note that activating these more realistic options for the Earth’s geomagnetic field by means of the GCR–SPE
card has only the effect of using the resulting field while showering in the atmosphere (see next point), a minimal
correction with respect to the dominant effect of the geomagnetic cutoff.
2) The local geomagnetic field can be taken into account during shower development in the atmosphere. The field
is automatically provided by the default MAGFLD Fluka user routine, in accordance with the option selected in the
GCR–SPE card. For local problems, provided the coordinate system is consistently used (that is, geomagnetic
coordinates for the dipolar field, geographic ones for the multipolar field) there is no need to provide any
orientation or intensity information about the field.
16.2 Scoring
The usual scoring options (USRBDX, USRYIELD. . . ) can be used to define detectors to calculate the fluence of
different radiation fields.
Continuation card:
WHAT(2) = vertical geomagnetic cutoff at central latitude for WHAT(1) = 2.0, no meaning otherwise
For SDUM = SPE–SPEC, SPE–2003 or SPE–2005: the source is a Solar Particle Event
The input parameters are the same for the three SDUM values.
For SDUM = SPE–SPEC or SPE–2005, the spectrum is read from a file sep20jan2005.spc.
For SDUM = SPE–2003, the spectrum is read from a file sep28oct2003.spc.
Continuation card:
SDUM = “ & ” in any position in column 71 to 78 (or in the last field if free format is used)
* follow
*
MATERIAL 8.781E-08 8.0 AIR001
COMPOUND -.9256E-03 5.0 -.2837E-03 6.0 -.01572E-3 7.0 AIR001
MATERIAL 1.036E-07 9.0 AIR002
COMPOUND -.9256E-03 5.0 -.2837E-03 6.0 -.01572E-3 7.0 AIR002
...
* Idem: Mat-prop cards:
MAT-PROP 7.168E-05 8.0 8.0 1.0
MAT-PROP 8.453E-05 9.0 9.0 1.0
MAT-PROP 9.957E-05 10.0 10.0 1.0
...
* cards to Assign a different material to each region
ASSIGNMAT 8.0 1.0 103.0 102.0 1.
ASSIGNMAT 9.0 2.0 104.0 102.0 1.
ASSIGNMAT 10.0 3.0 105.0 102.0 1.
* Internal Vacuum: black hole in this case
ASSIGNMAT 1.0 101.0 0.0
* External Vacuum
ASSIGNMAT 2.0 102.0 1.0
* Internal vacuum black hole:
ASSIGNMAT 1.0 203.0 0.0
* Atmospheric black hole:
ASSIGNMAT 1.0 204.0 0.0
* External black hole:
ASSIGNMAT 1.0 205.0 0.0
ASSIGNMAT 1.0 206.0 0.0
MGNFIELD 20. 100. 30. 0.0 0.0 0.0
*
STEPSIZE -100. 100000. 1.0 206.0
PHYSICS +1. 1.0 39.0 DECAYS
*
* The following cards deactivate/activate the electromagnetic
* interactions. If ON, cuts on e+/- & gamma have to be defined in the
* EMFCUT cards.
*
EMF EMF-OFF
*EMFCUT -0.001 +0.0005 1.0 205.0
*EMFCUT -0.001 +0.0005 1.0 1.0 205.0
SCORE 208.0 210.0 201.0 229.0
*
* The following cards activate the scoring of double differential flux
* (energy * and angle) at the boundaries of some atmospheric layers
*
* Mu -
USRYIELD 2398.0 11.0 -21.0 98.0 99.0 1.0 970.6g/cm2
USRYIELD 20.552 0.576 48.0 11.47834 0.0 6.0 &
USRYIELD 2398.0 11.0 -21.0 99.0 100.0 1.0 1001.g/cm2
USRYIELD 20.552 0.576 48.0 11.47834 0.0 6.0 &
USRYIELD 2398.0 11.0 -21.0 100.0 101.0 1.0 1033.g/cm2
USRYIELD 20.552 0.576 48.0 11.47834 0.0 6.0 &
* Mu +
USRYIELD 2398.0 10.0 -22.0 98.0 99.0 1.0 970.6g/cm2
USRYIELD 20.552 0.576 48.0 11.47834 0.0 6.0 &
USRYIELD 2398.0 10.0 -22.0 99.0 100.0 1.0 1001.g/cm2
USRYIELD 20.552 0.576 48.0 11.47834 0.0 6.0 &
USRYIELD 2398.0 10.0 -22.0 100.0 101.0 1.0 1033.g/cm2
USRYIELD 20.552 0.576 48.0 11.47834 0.0 6.0 &
...
RANDOMIZ 1.0
START 100000.
USROCALL
STOP
Chapter 17
Special source: synchrotron radiation
Synchrotron radiation photons are assumed to be emitted by a particle (most commonly an electron or a
positron, but any charged particle is allowed), moving along one or two circular arcs or helical paths. The emitting
particle is not transported. The bending is assumed to be due to a magnetic field of intensity and direction specified
by the user, but no magnet necessarily needs to be described in the geometry, and the magnetic field must not be
declared with command MGNFIELD nor assigned to any region with command ASSIGNMAt.
The program samples energy and direction of the synchrotron radiation photons from the proper energy and
angular distributions. Polarisation is implemented as a function of emitted photon energy. The emitting particle can
have any direction with respect to that of the magnetic field. Therefore, photon emission can occur along arcs (if the
particle direction is perpendicular to the magnetic field) or helical paths in the more general case.
The SPECSOUR command for synchrotron radiation extends over two cards. The input parameters are:
First card:
WHAT(3) > 0.0: curvature radius of the emitting particle trajectory (cm)
SDUM : SYNC–RAD if the z-component of the magnetic field versor is > 0.0,
SYNC–RDN if the z-component of the magnetic field versor is < 0.0
Continuation card:
WHAT(2) = x-coordinate of the starting point of a possible second path of same length (see Note 1))
WHAT(3) = y-coordinate of the starting point of the second path (see Note 1))
WHAT(4) = z-coordinate of the starting point of the second path (see Note 1))
WHAT(5) = x-component of emitting particle direction versor at the beginning of the second path (see Note 1))
WHAT(6) = y-component of emitting particle direction versor at the beginning of the second path (see Note 1))
SDUM = “ & ” in any position in columns 71–78 (or in last field if free format is used)
Note
1) The starting point of the first arc or helical path as well as the initial direction of the emitting particle must
be defined in the BEAMPOS card.
Chapter 18
History of FLUKA
18.1 Introduction
The history of Fluka goes back to 1962-1967. During that period, Johannes Ranft was at CERN doing work on
hadron cascades under the guidance of Hans Geibel and Lothar Hoffmann, and wrote the first high-energy Monte Carlo
transport codes.
Starting from those early pioneer attempts, according to a personal recollection of Ranft himself [173], it is
possible to distinguish three different generations of “Fluka” codes over the years, which can be roughly identified
as the Fluka of the ’70s (main authors J. Ranft and J. Routti), the Fluka of the ’80s (P. Aarnio, A. Fassò,
H.-J. Möhring, J. Ranft, G.R. Stevenson), and the Fluka of today (A. Fassò, A. Ferrari, J. Ranft and P.R. Sala).
These codes stem from the same root and of course every new “generation” originated from the previous one.
However, each new “generation” represented not only an improvement of the existing program, but rather a quantum
jump in the code physics, design and goals. The same name “Fluka” has been preserved as a reminder of this
historical development — mainly as a homage to J. Ranft who has been involved in it as an author and mentor from
the beginning until the present days — but the present code is completely different from the versions which were
released before 1990, and far more powerful than them.
Around 1970, J. Ranft got a position at Leipzig University. During the SPS construction phase in the Seventies
he was frequently invited by the CERN-Lab-II radiation group, led by Klaus Goebel, to collaborate in the evaluation
of radiation problems at the SPS on the basis of his hadron cascade codes. These codes were Fluka and versions
with different geometries and slightly differing names [187]. Jorma Routti, of Helsinki University of Technology,
collaborated with Ranft in setting up several such versions [156, 157]. The particles considered were protons,
neutrons and charged pions.
At that time, Fluka was used mainly for radiation studies connected with the 300 GeV project [65, 97, 98].
During that time, the development of Fluka was entirely managed by Ranft, although many suggestions for various
improvements came from Klaus Goebel, partly from Jorma Routti and later from Graham Stevenson (CERN). In that
version of Fluka, inelastic hadronic interactions were described by means of an inclusive event generator [157, 160].
In addition to nucleons and charged pions, the generator could now sample also neutral pions, kaons and antiprotons.
Ionisation energy losses and multiple Coulomb scattering were implemented only in a crude way, and a trans-
port cutoff was set at 50 MeV for all particles. The only quantities scored were star density and energy deposited. The
electromagnetic cascade and the transport of low-energy particles were not simulated in detail but the corresponding
energy deposition was sampled from “typical” space distributions.
as a coordinator.
The existing versions of Ranft’s programs (at least 14) were unified into a single code under the name Fluka.
The new code was capable of performing multi-material calculations in different geometries and of scoring energy
deposition, star density and differential “fluxes” (actually, angular yields around a target).
This second generation resulted in the release of several versions. In Fluka81 [128] only one geometry was
available (cylindrical). High-energy hadronic events were still sampled from inclusive distributions, but the low-
energy generators Hadrin [100, 105] and Nucrin [101, 106] were introduced for the first time.
In Fluka82 [1,165], Cartesian and spherical geometries were added, and in principle Combinatorial Geometry
too (but the latter option was rarely used, since there was no scoring associated with it and it did not support charged
particle multiple scattering and/or magnetic fields). After a first release with the old inclusive hadron generator, an
update [2] was released soon in which a new quark-chain generator developed by Ranft and his collaborators was
introduced in a tentative way [161, 163, 164]. At least four Ph.D. projects at Leipzig University contributed to this
new generator, based on the Dual Parton Model, known as Eventq. The model soon turned out to be superior by
far to all those used before in hadron Monte Carlo, and various versions of it were later adopted also in other codes
(Hetc [12, 13], Hermes [55], Calor [93], and the simulation codes used for the H1 and ZEUS experiments).
The link to the Egs4 program [140] was introduced in the Fluka86 version by G.R. Stevenson and A. Fassò,
as an alternative to the parameterised electromagnetic cascade used before. The link worked both ways, allowing
the transport of gammas issued from π0 decay, and also of photohadrons. Production of the latter was implemented only
for energies larger than the ∆ resonance, in the frame of the Vector Meson Dominance model, by J. Ranft and
W.R. Nelson [168].
The possibility to work with complex composite materials was introduced in the Fluka81 version by Möhring
and Sandberg. P. Aarnio restructured the code by encapsulating all COMMON blocks into INCLUDE files. In that
version, and in Fluka87 which soon followed [4], several other new features were introduced. A first attempt at
simulating ionisation fluctuations (with the Landau approach) was contributed by P. Aarnio, and a rudimentary
transport of particles in magnetic fields was provided by J. Lindgren (for charged hadrons only). Some fluence
estimators (boundary crossing, collision, track-length) were added in a preliminary form by Alberto Fassò, based on
the same algorithms he had written for the Morse code [66]. J. Ranft and his group improved the Eventq hadron
generator with the inclusion of diffractive events and Fermi momentum and provided a first model (later abandoned)
of nucleus-nucleus collisions.
Practically none of these features, however, survives today in the same form: in all cases, with the exception
of the hadron event generator, even the basic approach is now completely different.
Over a period of six years, Fluka evolved from a code specialised in high energy accelerator shielding, into a
multipurpose multiparticle code successfully applied in a very wide range of fields and energies, going much beyond
what was originally intended in the initial development reworking plan of Fassò and Ferrari. Just as examples, a few
of the fields where the modern Fluka has been successfully applied are listed in the following:
– Cosmic Rays: First 3D neutrino flux simulation, Bartol, MACRO, Notre-Dame, AMS, Karlsruhe (Corsika)
– Neutron background in underground experiments (MACRO, Palo Verde)
– Accelerators and shielding: the very first Fluka application field
– Beam-machine interactions: CERN, NLC, LCLS, IGNITOR
– Radiation Protection: CERN, INFN, SLAC, Rossendorf, DESY, GSI, TERA, APS
– Waste Management and environment: LEP dismantling, SLAC
– Synchrotron radiation shielding: SLAC
– Background and radiation damage in experiments: Pioneering work for ATLAS
– all LHC experiments, NLC
– Dosimetry, radiobiology and therapy:
– Dose to Commercial Flights: E.U., NASA, AIR project (USA)
– Dosimetry: INFN, ENEA, GSF, NASA
– Radiotherapy: Already applied to real situations (Optis at PSI, Clatterbridge, Rossendorf/GSI)
– Dose and radiation damage to Space flights: NASA, ASI
– Calorimetry:
– ATLAS test beams
– ICARUS
– ADS, spallation sources (Fluka+EA-MC, C. Rubbia et al.)
– Energy Amplifier
– Waste transmutation with hybrid systems
– Pivotal experiments on ADS (TARC, FEAT)
– nTOF
This effort, mostly done in Milan by Ferrari and Paola Sala (also of INFN), started in 1989 and went off
immediately in many directions: a new structure of the code, a new transport package including in particular an
original multiple Coulomb scattering algorithm for all charged particles, a complete remake of the electromagnetic
part, an improvement and extension of the hadronic part, a new module for the transport of low-energy neutrons, an
extension of Combinatorial Geometry and new scoring and biasing facilities. At the end of 1990, most of these goals
had been achieved, although only in a preliminary form. All the new features were further improved and refined in
the following years.
One of the first changes which led to the modern Fluka was a complete redesign of the code structure. The main
change was a general dynamical allocation scheme allowing great flexibility to be obtained while keeping the global
memory size requirements within reasonable limits. Other improvements were a re-arrangement of COMMON blocks
to optimise variable alignment, a parameterisation of constants making the program easier to maintain and update,
the possibility to insert comments freely in the input, and special attention devoted to portability (Fluka87 could run
only on IBM under VM-CMS).
The greatest importance was attached to numerical accuracy: the whole code was converted to double precision
(but the new allocation scheme allowed for implementation also in single precision on 64-bit computers). As a result,
energy conservation was ensured within 10^-10.
A decision was also made to take systematically maximum advantage from the available machine precision,
avoiding all unnecessary rounding and using consistently the latest recommended set of the physical constant val-
ues [142]. Such an effort succeeded in strongly reducing the number of errors in energy and momentum conservation
and especially the number of geometry errors.
A double precision random number generator was also adopted, kindly provided by Fred James (CERN) [113],
and based on the algorithm of Ranmar by Marsaglia and Zaman of Florida State University [123,124]. The possibility
to initialise different independent random number sequences was introduced in 2001. In 2005, the double-precision
generator newly proposed by Marsaglia and Tsang [125] was implemented.
A deliberate choice was made at an early stage to give preference to table look-up over analytical parame-
terisations or rejection sampling. The burden of large file management was more than compensated by the better
accuracy and increased efficiency. Cumulative tabulations optimised for fast sampling were initialised at run-time
for the materials of the problem on hand, and were obtained mainly from complete binary data libraries stored in
external files.
The concern for self-consistency was and still is the main guiding principle in the design of the code structure.
The same attention has been devoted to each component of the hadronic and of the electromagnetic cascade, with
the aim of obtaining a uniform degree of accuracy. For this reason, Fluka can now be used just as well to solve
problems where only a single component is present (pure hadron, neutron, muon or electromagnetic problems). An
effort has also been made to give a complete description of the mutual interaction between the different components,
preserving the possible correlations.
A set of default settings recommended for several applications (shielding, radiotherapy, calorimetry etc.) was
introduced in 1996 to help the user in a task which is difficult, but essential for obtaining reliable results.
18.4.2 Geometry
The original Combinatorial Geometry (CG) package from MAGI [99, 121] was adopted and extensively improved by
Fassò and Ferrari, starting from the one used in their improved version of the Morse code [60]. In 1990, new bodies
were added (infinite planes and cylinders) which made the task of writing geometry input much easier and allowed
more accurate and faster tracking.
CG had originally been designed for neutral particles, but charged particles definitely required a fully new
treatment near boundaries, especially when magnetic fields were present. Previous attempts to use CG to track
charged particles, in Fluka87, Egs4 and other codes, had always resulted in a large number of particle rejections,
due to rounding errors and to the “pathological” behaviour of charged particles scattering near boundaries, and in
the practical impossibility of using CG for these purposes.
The tracking algorithm was thoroughly redesigned attaining a complete elimination of rejections. A new
tracking strategy brought about large speed improvements for complex geometries, and the so-called DNEAR facility
(minimum distance from any boundary) was implemented for most geometry bodies and all particles. A sophisticated
algorithm was written to ensure a smooth approach of charged particles to boundaries by progressively shortening
the length of the step as the particle gets closer to a boundary. Boundary crossing points could now be identified
precisely even in the presence of very thin layers. The combined effect of multiple scattering and magnetic/electric
fields was taken into account.
In 1994, the Plotgeom program, written by R. Jaarsma and H. Rief in Ispra and adapted as a Fluka subrou-
tine by G.R. Stevenson in 1990, was modified by replacing its huge fixed dimension arrays with others, dynamically
allocated. The same year, a repetitive (lattice) geometry capability was introduced in CG by Ferrari, and a powerful
debugger facility was made available.
In 1997-1998, following a request from the ATLAS experiment, INFN hired a fellow, S. Vanini, who, together
with Sala, developed an interface called Flugg which allows Fluka to be used with the Geant4 geometry routines
for detector description [43, 44]. This interface was further improved by Sala and in recent times by I. Gonzalez and
F. Carminati from ALICE.
In 2001-2002, in the framework of a collaboration between INFN-Milan and GSF (Germany), Ferrari developed
a generalised voxel geometry model for Fluka. This algorithm was originally developed to allow the use inside Fluka
of the human phantoms developed at GSF from whole-body CT scans of real persons. It was general enough to be coupled
with generic CT scans, and it is already used in Rossendorf (Germany) for hadron therapy applications.
The number of particles which can be transported by Fluka was 25 in 1990; after the muon (anti)neutrinos and
several hyperons were added, the number increased to 36. In 1992, transport of light ions (deuterons, tritons, 3He,
α) was introduced, without nuclear interactions. In 1996 τ leptons and neutrino transport (and in some cases
interactions) were added. In 1999 the transport of optical photons was set up. The same year charmed hadrons
brought the total number of transported particles to 63, plus all kinds of heavy ions.
The transport threshold, originally fixed at 50 MeV, has been lowered since 1991 to the energy of the Coulomb
barrier. Below that value, charged particles were ranged out to rest.
The old Fluka allowed for two and three body phase-space-like particle decays, with no account of possible matrix
elements, particle polarisations and higher multiplicities. Starting in 1995, particle decays have been rewritten
from scratch by Ferrari allowing for more than 3 body decays, implementing polarised decays for π’s, kaons, τ ’s and
muons when relevant, and introducing proper matrix elements for kaon and muon decays. Among these features,
polarised muon decays with the proper V–A matrix element were developed by G. Battistoni (INFN-Milan) and Kµ3
and Kl3 decays of K±/KLong with the proper matrix element were based on the Kl3dcay code kindly provided by
Vincenzo Patera (INFN-Frascati).
Various methods of particle decay biasing were also introduced by Ferrari (described later in 18.4.14).
Transport in magnetic fields was extended to electrons and positrons in 1990 by Ferrari. In 1992 and again in
1994, the magnetic field tracking algorithm was completely reworked by Ferrari and Sala in order to overcome the
many limitations of the previous one. The new algorithm was much more precise and fast, it supported complex
particle/boundary interactions typical of multiple scattering, allowed for charged particle polarisation, and no
longer missed by construction any geometrical feature, even if small, met along the curved path.
The version of Egs4 which had been linked to Fluka87 was an early one, which did not include the Presta algorithm
for the control of the multiple scattering step and was therefore very sensitive to the step length chosen. In 1989,
Ferrari and Sala developed and implemented in Fluka an independent model which had several advantages even
with respect to Presta: it was accounting for several correlations, it sampled the path length correction accounting
for the first and second moment of its distribution, it allowed the introduction of high-energy effects (nuclear form
factors) and could be easily integrated within the Combinatorial Geometry package. The algorithm, which included
also higher order Born approximations, was very efficient and was taking care also of special high-energy effects, very
grazing angles, correlations between angular, lateral and longitudinal displacements, backscattering near a boundary
etc.
The Ferrari-Sala model, which has proved very robust and has been shown to reproduce with good accuracy
even backscattering experiments, was applied to both electrons and heavier charged particles. The final revision and
update of the algorithm were made in 1991. In 1995, the Fano correction for multiple scattering of heavy charged
particles [64] was introduced.
The treatment of ionisation losses was completely re-written in 1991-1992 by Fassò and Ferrari to eliminate many
crude approximations, and delta-ray production was added. Ranging of stopping charged particles was also changed.
Quenching according to the Birks law was introduced to calculate the response of scintillators.
Application of Fluka to proton therapy called for further refinements of stopping power routines in 1995,
with the inclusion of tabulated data of effective ionisation potentials and density effect parameters. Shell corrections
were added. The new treatment was fully compliant with ICRU recommended formulae and parameters and included
all corrections, including low-energy shell corrections as worked out by Ziegler et al.
In 1996, a new formalism for energy loss fluctuations by Ferrari [73] replaced the old treatment of Landau
fluctuations. This formalism, based on the statistical properties of the cumulants of a distribution, was applied
to both heavy charged particles and e+ e− , and was fully compatible with any user-defined threshold for delta ray
emission.
Other improvements concerned the possibility to define materials with local density different from average
(porous substances), and the ranging out of particles with energies lower than the transport cutoff.
In 1999-2000, heavy ion dE/dx was improved by the inclusion of effective Z and straggling (Ferrari).
High-energy energy loss mechanisms for heavy charged particles were implemented by Ferrari both as a
continuous and as an explicit treatment: bremsstrahlung and pair production in 1992, nuclear interaction via virtual
photons in 1993.
Time-dependent calculations were made available in Fluka for all particles since 1991. Time cutoffs could be set by
the user for both transport and scoring.
All the nuclear data used by the original evaporation routines by Dresner [59] (see below), were replaced by Ferrari at
an early stage with the most recent ones available from the NNDC database, thanks to the help of Federico Carminati.
In 1990, the isotopic composition of elements was included, taken from modern evaluations.
In 1991, the proton and neutron inelastic cross sections between 10 and 200 MeV were updated by Ferrari
and Sala with fits to experimental data. An accurate treatment of cross section energy dependence for all charged
particles, independent of the step size, was introduced at that stage through the fictitious-σ method.
Hadron-nucleus cross sections underwent further extensive reworking starting from 1994 by Ferrari and Sala.
The present treatment is based on a novel approach blending experimental data, data driven theoretical approaches,
PDG fits and phase shift analysis when available.
Elastic scattering on hydrogen of protons, neutrons, and pions was rewritten from scratch in 1994 by Ferrari
and Sala. The same work was done for kaons in 1997.
The original Egs4 implementation in Fluka was progressively modified, substituted with new algorithms and
increasingly integrated with the hadronic and the muon components of Fluka, giving rise to a very different code,
called EMF (Electro-Magnetic-Fluka). In 2005, the last remaining Egs routine was eliminated, although some
of the structures are still reminiscent of the original Egs4 implementation.
The Ferrari-Sala multiple scattering algorithm was the first major addition in 1989. It has already been
described elsewhere since it was applied to hadrons and muons as well. Following its implementation, the whole
electron/positron transport algorithm had to be reworked from scratch in order to comply with the features (initial
and final step deflections, complex boundary crossing algorithm) of the new model.
In 1990, the treatment of photoelectric effect was completely changed. Shell-by-shell cross sections were
implemented, the photoelectron angular distribution [186] was added, taking into account the fine structure of the
edges, and production of fluorescence X-rays was implemented.
Many new features were added in 1991. The emission angle of pair-produced electrons and positrons and that
of bremsstrahlung photons, which were only crudely approximated in the original Egs4 code, were now sampled from
the actual physical distributions.
The full set of the electron-nucleus and electron-electron bremsstrahlung cross sections, differential in photon
energy and angle, published by Seltzer and Berger for all elements up to 10 GeV [189] was tabulated in extended form
and introduced into the code together with a brand new sampling scheme by Fassò and Ferrari. The energy mesh was
concentrated, especially near the photon spectrum tip, and the maximum energy was extended to higher energies.
The Landau-Pomeranchuk-Migdal effect [119, 120, 126, 127] for bremsstrahlung and the Ter-Mikaelyan polarisation
effect [198] (suppressing soft photon emission) were implemented.
Positron bremsstrahlung was treated separately, using below 50 MeV the scaling function for the radiation
integral given by Kim [114] and differential cross sections obtained by fitting proper analytical formulae to numerical
results of Feng et al. [80]. The photon angular distribution was obtained by sampling the emission angle from the
double differential formula reported by Koch and Motz [115], fully correlated with the photon energy sampled from
the Seltzer-Berger distributions.
The Compton effect routines were rewritten in 1993 by Ferrari and Luca Cozzi (University of Milan), including
binding effects. At the end of the same year, the effect of photon polarisation was introduced for Compton, Rayleigh
and photoelectric interactions by Ferrari.
In 1993 and 1994, A. Fassò and A. Ferrari implemented photonuclear reactions over the whole energy range,
opening the way to the use of Monte Carlo in the design of electron accelerator shielding [70]. Giant Dipole Resonance,
Delta Resonance and high-energy photonuclear total cross sections were compiled from published data [74] (further
updated in 2000 and 2002), while the quasi-deuteron cross section was calculated according to the Levinger model,
with the Levinger’s constant taken from Tavares et al. [197], and the damping factor according to Chadwick et al. [53].
The photon interaction with the nucleus was handled in the frame of the Fluka hadronic event generators Peanut
and DPM (see below).
In 1995, a single Coulomb scattering option was made available for electrons and positrons by Ferrari and Sala.
The aim of this option was mainly to eliminate some artefacts which affected the angular distributions of charged
particles crossing a boundary, but it turned out very useful also to solve some problems at very low electron energy
or with materials of low density (gases). In the same year, the electron transport algorithm was reworked once more
by Ferrari and Sala introducing an adaptive scheme which “senses” close boundaries in advance and automatically
adapts the step length depending on their distance. Also in 1995 Ferrari discovered that the Egs4 implementation
of Møller and Bhabha scattering, still used at that time in Fluka, was flawed. The bug was duly reported to the
Egs4 authors who took corrective actions on their own code, while Ferrari developed a new algorithm for Møller and
Bhabha scattering for Fluka.
In 1997 mutual polarisation of photons emitted in positron annihilation at rest was introduced by Ferrari.
Cherenkov photon production and optical photon transport was implemented in 1999 by Ferrari. In 2002
scintillation photon production was added as well.
In 2005, new implementations of the pair production and Rayleigh scattering processes have been finalized.
In 1998-2001 an improved version of the Ferrari-Sala multiple scattering model was developed, introducing
further refinements and the so-called “polygonal” step approach. This version is presently fully tested offline and will
soon be introduced into the production code.
In 2005, the need for an external data preprocessor has been eliminated, integrating all the needed function-
alities into the Fluka initialisation stage. At the same time, data from the EPDL97 [63] photon data library have
become the source for pair production, photoelectric and total coherent cross section tabulations, as well as for atomic
form factor data.
At the same time, Rayleigh scattering has been reworked from scratch by Ferrari with a novel approach,
and the photoelectric interaction model has been further improved with respect to the 1993 Ferrari-Sala approach,
extending it, among the other modifications, down to a few eV.
Finally the energy sampling for pair production has been completely reworked by Ferrari using a vastly
superior approach, which can distinguish between interactions in the nuclear or electron field, and properly samples
the element in a compound or mixture on which the interaction is going to occur. The new algorithm is also capable of
producing meaningful results for photon energies close to thresholds, where several corrections are important and the
electron/positron symmetry is broken, in a similar fashion to the bremsstrahlung case.
The 50 MeV energy cutoff was one of the most important limitations of the Fluka87 version. The cutoff concerned
muons and all hadrons, but it was the absence of neutron transport below 50 MeV which constituted the most serious
drawback for many applications. The limitations stemmed from the increasing inadequacy of the hadron interaction
models in dealing with interactions below 1 GeV, and from the lack of any detailed nuclear physics treatment, i.e.,
the lack of an evaporation model and low-energy particle production, at all energies.
Actually, several early attempts to overcome these weaknesses of the code had been made by H.-J. Möhring,
H. Kowalski and T. Tymieniecka (code Neuka [116, 202], for Uranium/Lead scintillator only) and J. Zazula (code
Flunev [207, 208]), with mixed results. The most promising approach was that of Jan Zazula, of the Institute of
Nuclear Physics in Cracow: he had coupled Fluka87 with the Evap-5 evaporation module which he had extracted
from the Hetc/KFA code, and then interfaced the code with the only available multi-group neutron cross section
library extending to 50 MeV and beyond, the HILO library.
The main limitations of these approaches were their inability to address the real deficiencies of the Fluka87
hadron interaction model, their lack of nuclear physics details and therefore the unreliability of their excitation energy
predictions, which indeed were never intended by the original authors for any real use.
Furthermore, it became apparent that HILO had several weaknesses: the cross section set had been put
together by extending a low-energy one of rather coarse structure based on evaluated experimental data with the
addition of much less accurate data calculated with an intranuclear cascade code (Hetc); for the same reason the
library did not contain any information on (n,γ) generation above 20 MeV and was based on a different Legendre
angular expansion below and above that energy. And because the library contained a very small number of materials,
the possibilities of application were rather limited.
The approach followed by Ferrari and Sala to overcome those shortcomings was two-fold:
– improve/substitute the hadronic interaction models in order to describe with reasonable accuracy low-energy
particle production and medium-low energy particle interactions
– develop an ad-hoc neutron cross section library for Fluka extending up to 20 MeV (in collaboration with
G.C. Panini and M. Frisoni of ENEA-Bologna [57])
The former point is discussed in detail in the section on hadronic models, the latter in the following.
Since Ferrari and Sala had started to work on a preequilibrium model (later known as Peanut, see next
section) which was expected to cover intermediate energies more accurately than the traditional intranuclear cascade
codes, it was decided to lower the Fluka energy cutoff to 20 MeV (thus making HILO unnecessary) and to create a
dedicated multigroup neutron cross section library to be used with Fluka, with the more usual upper energy limit of
20 MeV. The task was carried out with the essential collaboration of G.C. Panini, an expert of an ENEA laboratory
in Bologna specialised in nuclear data processing for reactor and fusion applications. Several neutron cross section
libraries have been produced for Fluka over the years as a result of a contract between INFN-Milan and ENEA [57].
These libraries, designed by Ferrari, had a format which was similar to the Anisn one [62] used for example
by Morse [60], but which was modified to include partial cross sections and kerma factors for dose calculations
(critically revised). Because at that time there was still a US embargo on the most recent ENDF/B evaluated file,
the cross sections were originally derived from the European compilations JEF–1 and JEF–2. (Later, they were
regularly updated with the best ones available from JEF, ENDF, JENDL and others). The choice of materials was
particularly tailored on detector and machine applications for high-energy colliders, including also cryogenic liquids
at various temperatures, and was much wider than in most other libraries: it contained initially about 40 different
materials (elements or isotopes), which became soon 70 (in 1991) and are now more than 130. Hydrogen cross sections
were also provided for different H molecular bonds (H gas, water, polyethylene). Doppler reduced broadening was
implemented for a few materials at liquid argon (87 K) and liquid helium (approximately 0 K) temperatures.
The incorporation of the neutron multigroup transport module into Fluka by Ferrari was loosely based on
the approach followed in the Morse and other multigroup codes, about which Ferrari and Fassò had a deep expertise. The
low-energy neutron transport and interaction routines were rewritten from scratch progressively introducing many
extra features which are detailed in the following. Capture and inelastic gamma generation was still implemented in
the multigroup framework, but gamma transport was taken care of by the EMF part of Fluka. Survival biasing
was left as an option to the user with the possibility to replace it by analogue survival.
Energy deposition computed via kerma factors was preserved, but in the case of hydrogen the recoiling protons
were explicitly generated and transported. The same was done with protons from the 14N(n,p)14C reaction due to
its importance for tissue dosimetry, and later for all reactions on 6Li.
The new low-energy neutron transport was ready at the end of 1990 [82]. The corresponding Fluka version
was called FlukaN for a while to underline the neutron aspect, but the name was soon abandoned.
At the beginning of 1997, the possibility to score residual nuclei produced by low-energy neutrons was intro-
duced. Many improvements were made in that same year. Federico Carminati, who was using Fluka for calculations
related to C. Rubbia’s energy amplifier, added to the program a few routines and nuclear data necessary to score
low-energy fission products. Pointwise cross sections were introduced for the neutron interactions with hydrogen.
Ferrari and Sala also developed a model to sample gammas from neutron capture in cadmium, an important reaction
for which no data were available in evaluated cross section files. Similar schemes for capture photon generation were
established in 2001 for other nuclei (Ar, Xe) [77]. Pointwise neutron transport and interactions for 6Li were also
provided.
The two Leipzig event generators developed in the 80’s, one for intermediate energies and the other for high energies
(> 5 GeV), were a remarkable achievement with great potentialities. In particular the high energy model was among
the first developed in the world based on partonic ideas and quark degrees of freedom (specifically on the so called
Dual Parton Model [45, 46]).
The part of the code concerning hadron-nucleus primary interactions at energies above 4 GeV has been
extensively extended and updated since 1987 and is now virtually a new model, even though the physics foundations
are still essentially the same. Several bugs and approximations have been removed too. The intermediate energy
resonance model has also been deeply modified and its use is currently restricted to a few particles over a small energy
range. The newly developed preequilibrium-cascade model Peanut has progressively replaced this model.
The main lines of the work developed mostly in Milan by Ferrari and Sala starting from 1990 can be summarised
as follows [56, 87]:
– further develop and improve the high energy DPM-based part of the models. This was performed in 4 main
stages, which eventually led to an almost completely new code still based on the same physics foundations with
some extensions.
– introduce a self-consistent nuclear environment in both the medium and high energy models, allowing for a
physically meaningful excitation energy and excited residual A and Z calculations.
– develop a state-of-the-art evaporation/fission/break-up/deexcitation stage to describe the “slow” part of nuclear
interactions. These actually took place in two major steps.
– develop a completely new model (Peanut) based on a novel approach for describing the low-to-intermediate
(up to a few GeV) energy range, while progressively phasing out the improved version of the intermediate
energy Leipzig code. This effort took also place in two main steps.
In all the developments described in this section and also in some other sections, J. Ranft always acted as the main
mentor and source of theoretical and often practical support. Even when he did not contribute to the code directly,
his ideas, help and suggestions were an essential part of its development.
The two models developed by the Leipzig group were initially improved by removing a number of known bugs
and approximations (mainly, but not only, in the kinematics). In the years 1990-1991 all hyperons and anti-hyperons
were added as possible projectiles, and most important, nuclear effects, previously restricted to Fermi momentum,
were expanded and treated more accurately, with an explicit treatment of the nuclear well potential, the inclusion of
detailed tables of nuclear masses to account for nuclear binding energy, a consistent exact determination of nuclear
excitation energy and an overall “exact” conservation of energy and momentum on an event-by-event basis. These
changes were the minimal modifications required for introducing a sensible evaporation module and related low-energy
particle production: they made up the first stage of upgrade of the intermediate and high energy event generator and
were performed by Ferrari and Sala.
In the following years, a Negative Binomial multiplicity distribution, correlations between primary interactions and cascade particles, and better energy-angle distributions were implemented. Sea quark distributions were updated,
new distributions were used for the number of primary collisions using an improved Glauber cascade approach, and
reggeon mediated interactions (single chains) were introduced at the lower energy end of the application range of the
Dual Parton Model. An initial improvement of the diffraction treatment, as well as of the hadronisation algorithm, was
performed. These developments ended up in the 1993 version, which represented the second stage of the high energy
generator development (and which was made available to Geant3 users, see later).
Several major changes were performed on both the intermediate and high energy hadron generators in the years 1994–1996 by Ferrari and Sala. The latter was extensively improved, bringing its results into much better agreement
with available experimental data from as low as 4 GeV up to several hundreds of GeV. A fully new treatment of
transverse momentum and of the DPM in general was developed, including a substantially improved version of the
hadronisation code and a new driver model for managing two-chain events. The existing treatment of high-energy
photonuclear reactions, previously already based on the VMD model but in an approximate way, was improved by
implementing the contribution of all different vector mesons, as well as the quasi-elastic contribution. The simulation
of diffractive events was completely reworked, distinguishing between resonant, single-chain and two-chain events, and smeared mass distributions for resonances were introduced. This version of the model was completed in 1996 and
performed very well together with the new “sophisticated” Peanut when applied to a variety of problems, ranging from radiation protection to cosmic ray showers in the atmosphere and to the test beam of the ATLAS calorimeters.
The latest round of improvements originated from the new interest of Ferrari and Sala in neutrino physics, triggered by their participation in the ICARUS experiment, and resulted in several improvements in the high-energy interaction model. In 1998, a new chain fragmentation/hadronisation scheme was put to use, and a new diffraction model was worked out, once more according to rigorous scaling, including low-mass diffraction and antibaryon
diffraction. In 1999, charm production was implemented by Ranft and Ferrari (with results reasonable at least for integrated rates), and charmed particle transport and decay were introduced. The chain building algorithm was thoroughly revised
to ensure a continuous transition to low energies, and a significant reworking was done on the chain hadronisation
process, providing a smooth and physically sound passage to chains made up of only two particles, resulting in an
overall better description of particles emitted in the fragmentation region. This model was thoroughly benchmarked
against data taken at WANF by NOMAD and the particle production data measured by SPY. It constituted the
basis for all calculations performed for CNGS, both in the early physics design stage and later in the optimisation
and engineering studies.
There were two main steps in the development of the Fluka preequilibrium cascade model (Peanut) by Ferrari and
Sala:
The first implementation of the Fluka cascade-preequilibrium model, the “linear” one, was finalised in July 1991 [83]. The model, loosely based for the preequilibrium part on the exciton formalism of M. Blann and coworkers known as the Geometry Dependent Hybrid Model (GDH) [32, 33], now cast in a Monte Carlo form, was able to treat nucleon interactions at energies between the Coulomb barrier (for protons) or 10–20 MeV (for neutrons) and 260 MeV (the pion threshold). The model featured a very innovative concept, coupling a preequilibrium approach with a classical intranuclear cascade model supplemented with modern quantum corrections. This approach was adopted for the first time by Fluka and independently by the Lahet code [145] at LANL. Capture of stopping negative pions, previously handled very crudely (the available alternatives being forced decay or energy deposition on the spot), was also introduced in this framework. This first implementation was called “linear” because in the cascade part refraction and reflection in the nuclear mean field were not yet taken into account, resulting in straight (“linear”) paths of particles through the nuclear medium. First order corrections for these effects were nevertheless applied to the final state angular distributions. This model immediately demonstrated superb performance when compared with nucleon-induced particle production data. Its implementation into Fluka made it possible to overcome some of the most striking limitations of the code and permitted the use of the new neutron cross section library, thanks to its ability to produce sound results down to 20 MeV: in this way it opened a huge range of new application fields for the code.
However, despite its good performance, the “linear” cascade-preequilibrium model was always regarded by Ferrari and Sala as a temporary solution for the low-energy end of particle interactions, while waiting for something even more sophisticated. The work on the “full” cascade-preequilibrium model, which in the meantime had been called Peanut (PreEquilibrium Approach to NUclear Thermalisation), started at the end of 1991 and produced the first fully working version by mid-1993. Despite its improved quality, this version was not included in any of the general-use Fluka versions until 1995, due to its complexity and the overall satisfactory results of the “linear” one for most applications. Until 1995, the full version was in use only by a few selected groups, including the EET group led by Carlo Rubbia at CERN, which had meanwhile decided to adopt Fluka as its standard simulation tool above 20 MeV, mostly because of the superior performance of the full Peanut version.
It would be too long to describe in detail all the features of this model, which represented a quantum jump in Fluka performance and a significant development in the field. Peanut combines an intranuclear cascade part and a preequilibrium part (very similar in the “linear” and full versions), with a smooth transition around 50 MeV for secondary nucleons and 30 MeV for primary ones. It included nuclear potential effects (refraction and reflection), as well as quantal effects such as Pauli blocking, nucleon-nucleon correlations, fermion antisymmetrisation, formation zone and coherence length (a new concept introduced by Ferrari and Sala which generalises the formation zone idea to low energies and two-body scattering). The model featured a sophisticated complex optical potential approach for pions, together with two- and three-nucleon absorption processes, and took into account the modifications of the pion resonant amplitudes due to the nuclear medium. For all elementary hadron-hadron scatterings (elastic, charge and strangeness exchanges) extensive use was made of available phase-shift analyses. Particle production was described in the framework of the isobar model, using a much extended version of the original Hadrin code from Leipzig, and through the Fluka DPM model at higher energies.
In 1995, distinct neutron and proton nuclear densities were adopted and shell model density distributions were introduced for light nuclei. The initial version of the full model had already extended the energy range of the original “linear” one from 260 MeV to about 1 GeV in 1994, with the inclusion of pion interactions. Giant Resonance and Quasi-deuteron photonuclear reactions were added in 1994 and improved in 2000. In 1996–1997 the emission of energetic light fragments (up to α’s) in the GINC (Generalised IntraNuclear Cascade) stage was described through the coalescence mechanism.
The upper limit of Peanut was further increased in 1996 to 1.8 GeV for nucleons and pions, and to 0.6 GeV for K+/K0; it was raised again one year later (2.4 GeV for nucleons and 1.6 GeV for pions), and once more in 2000 (3.5 GeV for both pions and nucleons). In 1998, Peanut was extended to K− and K̄0 induced interactions. In the 2005 version, all nucleon and pion reactions below a momentum of 5 GeV/c are treated by Peanut, while for kaons and hyperons the upper threshold is around 1.5 GeV (kinetic energy). Since 2005, antinucleon interactions are also treated in the Peanut framework. It is planned to progressively extend Peanut up to the highest energies by incorporating into
its sophisticated nuclear framework the Glauber cascade and DPM part of the high energy model.
One of the spin-offs of the work done for ICARUS was the introduction of nucleon decays and neutrino nuclear
interactions in 1997 [50], which prompted improvements in Peanut, for instance concerning Fermi momentum and
coherence length. Quasi-elastic neutrino interactions can be dealt with by Peanut natively; in 1999, the code was
coupled with the Nux neutrino-nucleon interaction code developed by André Rubbia at ETH Zurich to produce full
on-line neutrino-nucleus interactions, including resonance production and deep inelastic scattering. The combined
Fluka(Peanut)+Nux model gave outstanding results when compared with NOMAD data, lending support to all the predictions made for ICARUS.
Negative muon capture was also introduced in 1997 to meet ICARUS needs. Much to everyone's surprise, it turned out to be a key factor in the understanding of the unexpected background at the nTOF facility during its initial operation
phase in 2001.
Evaporation was initially implemented in Fluka in 1990–1991 by Ferrari and Sala through an extensively modified version of the original Dresner model, based on Weisskopf’s theory [59]. Relativistic kinematics was substituted for the original non-relativistic one; fragmentation of small nuclei was also introduced, although initially only in a rough manner. The mass
scale was changed to a modern one and the atomic masses were updated from a recent compilation. Improvements
also included a more sophisticated treatment of nuclear level densities, now tabulated with both A and Z dependence
and with the high temperature behaviour suggested by Ignatyuk [110]. A brand new model for gamma production
from nuclear deexcitation was added, with a statistical treatment of E1, E2 and M1 transitions, accounting for the yrast line and pairing energy. This “initial capability” evaporation was used together with the first-stage improved
high energy hadron generator and the HILO library for the very first calculations carried out in 1990 for the LHC
detector radiation environment. Later, in 1991, with the introduction of the “linear” preequilibrium model, full model coverage down to 20 MeV became available and the new neutron cross section library developed together with
ENEA-Bologna [57] started to be used.
In 1993 the RAL high-energy fission model by Atchison [16], kindly provided by R.E. Prael as implemented
in the Lahet code, was included after some extensive modifications to remove some unphysical patches which the
presence of a preequilibrium stage had made unnecessary. The model was further developed and improved over the years, and little is now left of the original implementation. Competition between evaporation and fission in heavy
materials was implemented. This development was set off by a collaboration on energy amplifiers with C. Rubbia’s
group at CERN. Eventually, Ferrari joined that group in 1998.
In 1995, a newly developed Fermi Break-up model, with a maximum of 6 bodies in the exit channel, was
introduced by Ferrari and Sala to describe the deexcitation of light nuclei (A ≤ 17). This development provided
better multiplicities of evaporated neutrons and distributions of residual nuclei. The deexcitation gamma generation
model was improved and benchmarked in the following year.
A completely new evaporation treatment was developed by Ferrari and Sala in 1996 and 1997 to replace the improved Dresner model. This new algorithm adopted a sampling scheme for the emitted particle spectra which
no longer made any Maxwellian approximation, included sub-barrier effects and took the full energy dependence
of the nuclear level densities into account. Gamma competition was introduced too. These physics improvements
allowed a much more accurate description of the production of residual nuclei. A refinement of this new package
took place in 2000/2001. The production of fragments up to mass 24 was tentatively included around 2003, and subsequently developed and benchmarked [18]; it is now available in the distributed version as an option to be
activated by the user.
18.4.13 Radioactivity
Fluka has been capable of making predictions about residual nuclei produced in hadronic and electromagnetic showers since late 1994. The accuracy of these predictions has steadily improved over the years, in particular after the introduction
of the new evaporation/fragmentation and the improvements and extensions of the Peanut model: versions before
the end of 1996 were unlikely to give satisfactory results. Of course, all Fluka versions prior to 1989 were totally
unable to formulate predictions on this issue. Since 1995, an offline code by Ferrari was distributed together with
Fluka, which allowed the user to compute offline the time evolution of a radionuclide inventory calculated with Fluka, for arbitrary irradiation profiles and decay times. In 2004–2005, this capability was brought on line by Ferrari and
Sala, with an exact analytical implementation (Bateman equations) of the activity evolution during irradiation and
cooling down, for arbitrary irradiation conditions. Furthermore, the generation and transport of decay radiation
(limited to γ, β − , and β + emissions for the time being) is now possible. A dedicated database of decay emissions has
been created by Ferrari and Sala, using mostly information obtained from NNDC, sometimes supplemented with
other data and checked for consistency. As a consequence, results for production of residuals, their time evolution
and residual doses due to their decays can now be obtained in the same run, for an arbitrary number of decay times
and for a given, arbitrarily complex, irradiation profile.
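
For reference, the Bateman solution mentioned above takes, for a linear decay chain 1 → 2 → … → n with distinct decay constants λ_i and a pure parent population N_1(0) at t = 0, the textbook free-decay form (the build-up during an arbitrary irradiation profile, as handled by Fluka, is obtained by superposing such solutions):

\[
N_n(t) \;=\; N_1(0)\,\Bigl(\prod_{i=1}^{n-1}\lambda_i\Bigr)\,
\sum_{i=1}^{n}\frac{e^{-\lambda_i t}}{\prod_{j=1,\,j\neq i}^{n}\bigl(\lambda_j-\lambda_i\bigr)}
\]
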
18.4.14 Biasing
Variance reduction techniques, a speciality of modern Fluka, have been progressively introduced over the years.
Transport biasing under user control is a common feature of low-energy codes, but in the high energy field biasing
has generally been restricted to built-in weighted sampling in event generators, not tunable by the user. In addition,
Monte Carlo codes are in general either weighted or analogue, but not both. In the modern Fluka, the user can
decide in which mode to run the code, and has the possibility to adjust the degree of biasing by region, particle and
energy.
Many different biasing options have been made available. Multiplicity reduction in high-energy hadron-nucleus
interactions was the first one to be introduced by Fassò (in 1987), to manage the huge number of secondaries produced
by the 20 TeV proton beams of the SSC. Ferrari made it possible for the user to tune it on a region-dependent basis. In 1990 Ferrari also added geometry splitting and Russian Roulette for all particles, based on user-defined region importances, and several biasing options for low-energy neutrons, inspired by Morse but adapted to the Fluka structure.
Region, energy and particle dependent weight windows were introduced by Fassò and Ferrari in 1992. In this
case the implementation was different from that of Morse (two biasing levels instead of three), and the technique
was not applied only to neutrons but to all Fluka particles. Decay length biasing was also introduced by Ferrari
(useful for instance to improve statistics of muons or other decay products, or to amplify the effect of rare short-lived
particles surviving at some distance from the production point). Inelastic length biasing, similar to the previous option and also implemented by Ferrari, makes it possible to modify the interaction length of some hadrons (and of photons) in one or all materials. It can be used to force a larger frequency of interactions in a low-density medium, and it is essential in all shielding calculations for electron accelerators.
Two biasing techniques were implemented by Fassò and Ferrari, which are applicable only to low-energy
neutrons.
– Neutron Non Analogue Absorption (or survival biasing) was derived from Morse, where it was applied systematically and was not under user control. In Fluka it was generalised to give the user full freedom to fix the ratio between scattering and absorption probability in selected regions and within a chosen energy range (a minimal sketch of the corresponding weight adjustment is shown after this list). While it is mandatory in some problems in order to keep neutron slowing down under control, it is also possible to switch it off completely to get an analogue simulation.
– Neutron Biased Downscattering, also for low-energy neutrons, gives the possibility to accelerate or slow down
the moderating process in selected regions. It is an option not easily managed by the average user, since it
requires a good familiarity with neutronics.
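
To make the weight bookkeeping behind the first of these two options concrete, the following fragment is a minimal illustrative sketch, not FLUKA source code: the routine and variable names are invented for the example. At a collision the neutron is forced to survive with a user-chosen probability PSURV, and its statistical weight is rescaled so that the expected surviving weight equals the analogue expectation.

      SUBROUTINE SURVIV ( SIGS, SIGA, PSURV, WEIGHT, LABSOR )
*     SIGS, SIGA : macroscopic scattering / absorption cross sections
*     PSURV      : user-chosen (biased) survival probability
*     WEIGHT     : particle statistical weight, updated on output
*     LABSOR     : returned .TRUE. if the history ends by absorption
      DOUBLE PRECISION SIGS, SIGA, PSURV, WEIGHT, PANA
      LOGICAL LABSOR
      REAL XI
*     Analogue survival (scattering) probability at this collision:
      PANA = SIGS / ( SIGS + SIGA )
      CALL RANDOM_NUMBER (XI)
      IF ( DBLE (XI) .LT. PSURV ) THEN
*        Forced survival: rescale the weight so that the expected
*        surviving weight PSURV * ( WEIGHT * PANA / PSURV ) equals
*        the analogue expectation WEIGHT * PANA.  PSURV = PANA gives
*        back the analogue game, PSURV = 1 pure survival biasing.
         LABSOR = .FALSE.
         WEIGHT = WEIGHT * PANA / PSURV
      ELSE
         LABSOR = .TRUE.
      END IF
      RETURN
      END

Choosing PSURV per region and energy range is precisely the user-control knob described above; pushing it far from the analogue value buys statistics in deep-penetration problems at the price of larger weight fluctuations.
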
Leading particle biasing, which already existed in Egs4, was deeply modified in 1994 by Fassò and Ferrari, making it tunable by region, particle, interaction type and energy. A special treatment was introduced for positrons, to account for the penetrating power of annihilation photons.
In 1997, in the framework of his work for ICARUS and CNGS, Ferrari implemented biasing of the direction
of decay neutrinos.
18.4.15 Scoring
The stress put on built-in generalised scoring options is another aspect of Fluka “philosophy” which differentiates
it from many other programs where users are supposed to write their own ad-hoc scoring routines for each problem.
This characteristic, which was already typical of the old Ranft codes, has allowed the development in the modern Fluka of some rather sophisticated scoring algorithms that would have been too complex for a generic user to program. One example is the “track-length apportioning” technique, introduced in 1990 by Fassò and Ferrari and used in dose and fluence binning, which computes the exact length of the segment travelled by the particle in each bin of a geometry-independent grid. This technique ensures fast convergence even when the scoring mesh is much smaller than the charged particle
step.
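
To illustrate the idea, here is a minimal sketch, deliberately reduced to one dimension and using invented names; it is not the FLUKA implementation, which apportions the step along its direction through a full three-dimensional binning. The exact overlap of a straight step with each bin of a uniform grid is computed and scored with the particle weight, so that the estimate converges quickly even when many bins fit inside one step.

      SUBROUTINE TRKAPP ( XA, XB, WEIGHT, XLO, XHI, NX, SCORE )
*     XA, XB     : end points of the (straight) particle step
*     WEIGHT     : particle statistical weight
*     XLO, XHI   : limits of the scoring grid, divided into NX bins
*     SCORE(NX)  : accumulated weighted track length per bin
      INTEGER NX, I, ILO, IHI
      DOUBLE PRECISION XA, XB, WEIGHT, XLO, XHI, SCORE(NX)
      DOUBLE PRECISION DX, X1, X2, B1, B2, SEG
      DX = ( XHI - XLO ) / DBLE (NX)
      X1 = MIN (XA, XB)
      X2 = MAX (XA, XB)
*     Bins containing the two end points (clamped to the grid):
      ILO = MAX ( 1, MIN ( NX, INT ( (X1-XLO)/DX ) + 1 ) )
      IHI = MAX ( 1, MIN ( NX, INT ( (X2-XLO)/DX ) + 1 ) )
      DO 10 I = ILO, IHI
*        Exact overlap of the step [X1,X2] with bin I:
         B1  = XLO + DBLE (I-1) * DX
         B2  = B1 + DX
         SEG = MIN (X2, B2) - MAX (X1, B1)
         IF ( SEG .GT. 0.D0 ) SCORE(I) = SCORE(I) + WEIGHT * SEG
   10 CONTINUE
      RETURN
      END

A caller would zero SCORE once, call TRKAPP for every step of every history, and finally divide by the bin volume and the number of primaries to obtain a fluence or dose estimate.
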
Different kinds of fluence estimators (track-length, collision, boundary crossing) were implemented in 1990–1992, replacing the corresponding old ones. The dimension limitations (number of energy intervals) were removed and replaced by much larger flexibility thanks to dynamic memory allocation. Scoring as a function of the angle with respect to the normal to a surface at the point of crossing was also introduced. Facilities were made available to score event-by-event energy deposition and coincidences or anti-coincidences between energy deposition signals in different
regions, and to study fluctuations between different particle histories.
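
For orientation, these are the textbook estimators, written here in generic form rather than as Fluka-specific formulae: with w_i the particle weights, ℓ_i the track lengths inside a region of volume V, Σ_t the total macroscopic cross section, and θ_i the angle between the particle direction and the normal of a surface of area A,

\[
\Phi_{\mathrm{track}} \simeq \frac{1}{V}\sum_i w_i\,\ell_i ,
\qquad
\Phi_{\mathrm{coll}} \simeq \frac{1}{V}\sum_i \frac{w_i}{\Sigma_t(E_i)} ,
\qquad
\Phi_{\mathrm{surf}} \simeq \frac{1}{A}\sum_i \frac{w_i}{\lvert\cos\theta_i\rvert} ,
\]

where the first sum runs over track segments, the second over collisions inside the volume, and the third over boundary crossings.
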
The pre-existing option to write a collision file was completely rewritten and adapted to the more extended capabilities of the new code. In 1991, time gates became applicable to most scoring facilities, making it possible to ignore
delayed radiation components such as multiply scattered low-energy neutrons.
In 1994, two new options were added: residual nuclei scoring and scoring of particle yields as a function of
angle with respect to a fixed direction. In the latter case, several new quantities can be scored, such as rapidity,
various kinematical quantities in the lab and in the centre-of-mass frame, Feynman-x etc.
In 2004–2005, as explained above, the possibility to follow on-line the radiation from unstable residual nuclei was implemented, together with an exact analytical calculation (Bateman equations) of the activity evolution during
irradiation and cooling down. As a consequence, results for production of residuals and their effects as a function of
time can now be obtained in the same run.
18.4.16 Heavy ions
Heavy ion transport (energy loss, effective charge and associated fluctuations, multiple scattering) was developed by Ferrari as early as 1998, largely based on already existing tools in Fluka.
There was an increasing demand for extending the Fluka interaction models to heavy ions, both for basic and
applied physics applications (cosmic rays, hadrontherapy, radiation problems in space). A long-standing collaboration has been going on since 1997 with Prof. L. Pinsky, chair of the Physics Department at the University of Houston. This collaboration became formal in 2000 with the issue of a NASA grant covering three years of Fluka developments in the field of heavy ion transport and interactions, as well as the development of user-friendly tools based on ROOT for better management of the code (project FLEUR). Further support came from ASI, in the form of a grant to a collaborating group in Milan to hire a person for one year to work on these issues.
The Dpmjet code has been interfaced to cover the high (> 5 GeV/n) energy range, and an extensively modified
version of the Rqmd-2.4 code is used at lower energies.
At very low energy, below ≈ 0.1 GeV/n, a treatment based on the Boltzmann Master Equation (BME) has
been implemented [49, 51, 52].
In 2004, a model for electromagnetic dissociation of ions in ion-ion interactions was implemented [18].
Dpmjet is a high energy hadron-hadron, hadron-nucleus and nucleus-nucleus interaction model developed by J. Ranft, S. Roesler and R. Engel, capable of describing interactions from several GeV per nucleon up to the highest cosmic ray energies. There are strong ties between the Fluka and Dpmjet teams (with J. Ranft being an author of both), and the collaboration has been going on since the mid-90’s. An interface with Dpmjet-2.5 was developed by Toni Empl (Houston), Ferrari and Ranft [61]. The interface allows arbitrary ion interactions to be treated in Fluka at any energy in excess of 5 GeV/n. The excited projectile and target residual leftovers are passed back to the Fluka evaporation/fission/break-up routines for the final deexcitation and “low” (in the excited residual rest frame) energy particle production. As part of the interface work, a new fast multi-ion/multi-energy initialisation scheme has been developed for Dpmjet and a new cross section algorithm has been worked out for runtime calculations based on a
An interface with Dpmjet-3 [180] has been developed in collaboration with Stefan Roesler (CERN) and
is now available.
A very similar interface has been developed by Francesco Cerutti (University of Milan and INFN), Toni Empl
(University of Houston), Alfredo Ferrari, Maria Vittoria Garzelli (University of Milan and INFN) and Johannes
Ranft, with the Relativistic Quantum Molecular Dynamics code (Rqmd) of H. Sorge [191–193]. Also in this case
the evaporation and deexcitation of the excited residuals are performed by Fluka. Significant interventions on the original code were necessary to bring the energy/momentum balance of each interaction under control, so as to allow for
a meaningful calculation of excitation energy. This brand new development allows Fluka to be used for ions from
roughly 100 MeV/n up to cosmic ray energies. The results of this modified model can be found in [8, 14, 79]. Work
is in progress to develop a new original code to replace this Rqmd interface.
18.4.17 Neutrino interactions
Quasi-elastic neutrino interactions were implemented in Fluka in 1997. Between 1998 and 2008, an interface to an external neutrino generator was used, although not distributed. This interface was embedded into the
Peanut model, and allowed interesting studies of nuclear effects in neutrino interactions.
In 2008, a new generator for neutrino interactions on nucleons and nuclei was developed and implemented
in Fluka. The neutrino-nucleon event generator handles Deep Inelastic Scattering (Nundis, mainly developed by
Mattias Lantz) and production of delta resonances (Nunres). Hadronisation after DIS is handled by the same
hadronisation model used in hadron-hadron interactions. Nundis and Nunres are embedded in Peanut to simulate
neutrino-nucleus reactions.
18.4.18 Code size
The present Fluka alone totals about 540,000 lines of Fortran code (25 MBytes of source code), plus some 60,000 lines (2 MBytes) of ancillary codes used offline to generate and/or test the various data files required for running. Of these, roughly one third is associated with Peanut. By way of comparison, the latest release of the previous Fluka generation, Fluka87, contained roughly 30,000
lines (1.2 MBytes), out of which very few survive in the present code, mostly in the high energy generator and in the
old intermediate energy one.
References
[1] P.A. Aarnio, J. Ranft and G.R. Stevenson
A long writeup of the FLUKA82 program
CERN Divisional Report TIS–RP/106–Rev. (1984)
[2] P.A. Aarnio, J. Ranft and G.R. Stevenson
First update of FLUKA82, including particle production with a multi-chain fragmentation model (EVENTQ)
CERN TIS–RP/129 (1984)
[3] P.A. Aarnio, A. Fassò, H.-J. Möhring, J. Ranft, G.R. Stevenson
FLUKA86 user’s guide
CERN Divisional Report TIS–RP/168 (1986)
[4] P.A. Aarnio, J. Lindgren, J. Ranft, A. Fassò, G.R. Stevenson
Enhancements to the FLUKA86 program (FLUKA87)
CERN Divisional Report TIS–RP/190 (1987)
[5] P.A. Aarnio, A. Fassò, A. Ferrari, H.-J. Möhring, J. Ranft, P.R. Sala, G.R. Stevenson, J.M. Zazula
FLUKA: hadronic benchmarks and applications
Proc. MC93 Int. Conf. on Monte Carlo Simulation in High Energy and Nuclear Physics, Tallahassee (Florida),
22–26 February 1993. Ed. by P. Dragovitsch, S.L. Linn, M. Burbank, World Scientific, Singapore 1994, p. 88–99
[6] P.A. Aarnio, A. Fassò, A. Ferrari, H.-J. Möhring, J. Ranft, P.R. Sala, G.R. Stevenson, J.M. Zazula
Electron-photon transport: always so good as we think? Experience with FLUKA
Proc. MC93 Int. Conf. on Monte Carlo Simulation in High Energy and Nuclear Physics, Tallahassee (Florida),
22–26 February 1993. Ed. by P. Dragovitsch, S.L. Linn, M. Burbank, World Scientific, Singapore 1994, p. 100–
110
[7] V. Agrawal, T.K. Gaisser, P. Lipari and T. Stanev
Atmospheric neutrino flux above 1 GeV
Phys. Rev. D53, 1314–1323 (1996)
[8] H. Aiginger, V. Andersen, F. Ballarini, G. Battistoni, M. Campanella, M. Carboni, F. Cerutti, A. Empl,
W. Enghardt, A. Fassò, A. Ferrari, E. Gadioli, M.V. Garzelli, K.S. Lee, A. Ottolenghi, K. Parodi, M. Pelliccioni,
L. Pinsky, J. Ranft, S. Roesler, P.R. Sala, D. Scannicchio, G. Smirnov, F. Sommerer, T. Wilson and N. Zapp
The FLUKA code: new developments and application to 1 GeV/n Iron beams
Adv. Space Res. 35, 214–222 (2005)
[9] J. Alcaraz et al. (AMS Coll.)
Cosmic protons
Phys. Lett. B490 (2000) 27–35
[10] J. Alcaraz et al. (AMS Coll.)
Helium in near Earth orbit
Phys. Lett. B494 (2000) 193–202
[11] R.G. Alsmiller Jr., J.M. Barnes, J.D. Drischler
Neutron-photon multigroup cross sections for neutron energies ≤ 400 MeV (Revision 1)
Nucl. Instr. Meth. A249, 455–460 (1986)
[12] F.S. Alsmiller and R.G. Alsmiller, Jr.
Inclusion of correlations in the empirical selection of intranuclear cascade nucleons from high energy hadron-
nucleus collisions
Nucl. Instr. Meth. A278, 713–721 (1989)
[13] R.G. Alsmiller, Jr., F.S. Alsmiller and O.W. Hermann
The high-energy transport code HETC88 and comparisons with experimental data
Nucl. Instr. Meth. A295, 337–343 (1990)
[14] V. Andersen, F. Ballarini, G. Battistoni, M. Campanella, M. Carboni, F. Cerutti, A. Empl, A. Fassò, A. Ferrari,
E. Gadioli, M.V. Garzelli, K. Lee, A. Ottolenghi, M. Pelliccioni, L.S. Pinsky, J. Ranft, S. Roesler, P.R. Sala
and T.L. Wilson,
The FLUKA code for space applications: recent developments
Adv. Space Res. 34, 1338–1346 (2004)
[15] M. Antonelli, G. Battistoni, A. Ferrari, P.R. Sala
Study of radiative muon interactions at 300 GeV
Proc. VI Int. Conf. on Calorimetry in High Energy Physics (Calor 96), Frascati (Italy), 8–14 June 1996. Ed.
A. Antonelli, S. Bianco, A. Calcaterra, F.L. Fabbri
Frascati Physics Series Vol. VI, p. 561–570 (1997)
[16] F. Atchison
Spallation and fission in heavy metal nuclei under medium energy proton bombardment
Meeting on Targets for neutron beam spallation sources, Ed. G. Bauer, KFA Jülich Germany, Jül–conf–34
(1980)
[17] G.D. Badhwar and P.M. O’Neill
Galactic cosmic radiation model and its applications
Adv. Space. Res. 17 no.2, 7–17 (1996)
[18] F. Ballarini, G. Battistoni, F. Cerutti, A. Empl, A. Fassò, A. Ferrari, E. Gadioli, M.V. Garzelli, A. Ottolenghi,
L.S. Pinsky, J. Ranft, S. Roesler, P.R. Sala and G. Smirnov
Nuclear Models in FLUKA: Present Capabilities, Open Problems, and Future Improvements
AIP Conference Proceedings 769, 1197–1202 (2005)
[19] W.H. Barkas, W. Birnbaum, F.M. Smith
Mass-ratio method applied to the measurement of L-meson masses and the energy balance in pion decay
Phys. Rev. 101, 778–795 (1956)
[20] W.H. Barkas, J.N. Dyer, H.H. Heckman
Resolution of the Σ−-mass anomaly
Phys. Rev. Lett. 11, 26–28 (1963); Erratum: 11, 138 (1963)
[21] V.S. Barashenkov, V.D. Toneev
Interactions of High Energy Particles and Nuclei with Nuclei (in Russian)
Atomizdat, Moscow (1972)
[22] G. Battistoni, A. Ferrari, P. Lipari, T. Montaruli, P.R. Sala and T. Rancati
A 3-dimensional calculation of the atmospheric neutrino fluxes
Astropart. Phys. 12, 315–333 (2000)
[23] A. Fassò, A. Ferrari, S. Roesler, J. Ranft, P.R. Sala, G. Battistoni, M. Campanella, F. Cerutti, L. De Biaggi,
E. Gadioli, M.V. Garzelli, F. Ballarini, A. Ottolenghi, D. Scannicchio, M. Carboni, M. Pelliccioni, R. Villari,
V. Andersen, A. Empl, K. Lee, L. Pinsky, T.N. Wilson and N. Zapp
The FLUKA code: Present applications and future developments
Computing in High Energy and Nuclear Physics 2003 Conference (CHEP2003), La Jolla, CA, USA, March
24–28, 2003, (paper MOMT004), eConf C0303241 (2003), arXiv:physics/0306162.
[24] G. Battistoni, A. Ferrari, T. Montaruli and P.R. Sala
The FLUKA atmospheric neutrino flux calculation
Astropart. Phys. 19, 269–290 (2003)
[25] T.H. Bauer, R.D. Spital, D.R. Yennie, F.M. Pipkin
The hadronic properties of the photon in high-energy interactions
Rev. Mod. Phys. 50, 261–436 (1978)
[26] M.J. Berger
Monte Carlo calculation of the penetration and diffusion of fast charged particles
In: B. Alder, S. Fernbach and M. Rotenberg (Eds.), Methods in Computational Physics 1, 135–215 (1963)
[27] H.A. Bethe
Zur Theorie des Durchgangs schneller Korpuskularstrahlen durch Materie
Ann. Physik 5, 325–400 (1930)
Selected Works of Hans A. Bethe, World Scientific, Singapore 1996, p. 77–154
[28] H.A. Bethe
Bremsformel für Elektronen relativistischer Geschwindigkeit
Z. Phys. 76, 293–299 (1932)
[29] H.A. Bethe and W. Heitler
On the stopping of fast particles and on the creation of positive electrons
Proc. Roy. Soc. A146, 83–112 (1934)
Selected Works of Hans A. Bethe, World Scientific, Singapore 1996, p. 187–218
[30] H.A. Bethe
Molière’s theory of multiple scattering
Phys. Rev. 89, 1256–1266 (1953)
[31] F. Biggs, L.B. Mendelsohn and J.B. Mann
Hartree-Fock Compton profiles for the elements
At. Data Nucl. Data Tables 16, 201–309 (1975)
[32] M. Blann
Hybrid Model for Pre-Equilibrium Decay in Nuclear Reactions
Phys. Rev. Lett. 27, 337–340 (1971)
[33] M. Blann
Importance of the Nuclear Density Distribution on Pre-equilibrium Decay
Phys. Rev. Lett. 28, 757–759 (1972)
[34] M. Blann
Preequilibrium decay
[54] http://ulysses.sr.unh.edu/NeutronMonitor/Misc/neutron2.html
[55] P. Cloth, D. Filges, R.D. Neef, G. Sterzenbach, Ch. Reul, T.W. Armstrong, B.L. Colborn, B. Anders and
H. Brückmann
HERMES, a Monte Carlo program system for beam-materials interaction studies
Report KFA/Jül–2203 (1988)
[56] G. Collazuol, A. Ferrari, A. Guglielmi, and P.R. Sala
Hadronic models and experimental data for the neutrino beam production
Nucl. Instr. Meth. A449, 609–623 (2000)
[57] E. Cuccoli, A. Ferrari, G.C. Panini
A group library from JEF 1.1 for flux calculations in the LHC machine detectors
JEF–DOC–340 (91) (1991)
[58] H. Daniel
Formation of Mesonic Atoms in Condensed Matter
Phys. Rev. Lett. 35, 1649–1651 (1975)
[59] L. Dresner
EVAP — A Fortran program for calculating the evaporation of various particles from excited compound nuclei
Oak Ridge National Laboratory report ORNL–TM–196 (1961)
[60] M.B. Emmett
The MORSE Monte Carlo radiation transport system
Oak Ridge National Laboratory report ORNL–4972 (1975)
Revision: ORNL–4972/R1 (1983)
Revision: ORNL–4972/R2 (1984)
[61] A. Empl, A. Fassò, A. Ferrari, J. Ranft and P.R. Sala,
Progress and Applications of FLUKA
Invited talk at the 12th RPSD Topical Meeting, April 14–18, 2002, Santa Fe, New Mexico, USA ,
electronic proceedings, American Nuclear Society ANS Order No. 700293, ISBN 8-89448-667-5
[62] W.W. Engle, Jr.
A User’s Manual for ANISN, A One-Dimensional Discrete Ordinate Transport Code with Anisotropic Scattering
Oak Ridge Report K–1693 (1967)
[63] D.E. Cullen, J.H. Hubbell and L. Kissel
EPDL97: the Evaluated Photon Data Library, ’97 Version
UCRL–50400, Vol. 6, Rev. 5 (1997)
[64] U. Fano
Inelastic collisions and the Molière theory of multiple scattering
Phys. Rev. 93, 117–120 (1954)
[65] A. Fassò, G.R. Stevenson
Air activity in the NAHIF complex
CERN Internal Report HS–RP/IR/78–45 (1978)
[66] A. Fassò
The CERN version of Morse and its application to strong-attenuation shielding problems
Proc. Topical Conference on Theory and Practices in Radiation Protection and Shielding, Knoxville (Tennessee)
22–24 April 1987, p. 462–471
[67] A. Fassò, A. Ferrari, J. Ranft, P.R. Sala, G.R. Stevenson, J.M. Zazula
Fluka92
Proc. Workshop on Simulating Accelerator Radiation Environments, Santa Fe (New Mexico) 11–15 January
1993,
Los Alamos report LA–12835–C (1994), p. 134–144
[68] A. Fassò, A. Ferrari, J. Ranft, P.R. Sala
FLUKA: present status and future developments
Proc. IV Int. Conf. on Calorimetry in High Energy Physics, La Biodola (Italy) 21–26 September 1993, Ed.
A. Menzione and A. Scribano, World Scientific, p. 493–502
[69] A. Fassò, A. Ferrari, J. Ranft, P.R. Sala, G.R. Stevenson, J.M. Zazula
A comparison of FLUKA simulations with measurements of fluence and dose in calorimeter structures
Nucl. Instr. Meth. A332, 459–468 (1993)
[70] A. Fassò, A. Ferrari, P.R. Sala
Designing electron accelerator shielding with FLUKA
Proc. of the 8th Int. Conf. on Radiation Shielding, Arlington (Texas) 24–28 April (1994), p. 643–649
[71] A. Fassò, A. Ferrari, J. Ranft and P.R. Sala
FLUKA: performances and applications in the intermediate energy range
Proc. of an AEN/NEA Specialists’ Meeting on Shielding Aspects of Accelerators, Targets and Irradiation
Facilities, Arlington (Texas) 28–29 April 1994. OECD Documents, Paris 1995, p. 287–304
[72] A. Fassò, A. Ferrari, J. Ranft, P.R. Sala
An update about FLUKA
Proc. 2nd Workshop on Simulating Accelerator Radiation Environments, CERN, Geneva (Switzerland), 9–11
October 1995, Ed. G.R. Stevenson, CERN Report TIS–RP/97-05, p. 158–170
[73] A. Fassò, A. Ferrari, J. Ranft, P.R. Sala
New developments in FLUKA modelling hadronic and EM interactions
Proc. 3rd Workshop on Simulating Accelerator Radiation Environments (SARE 3), 7–9 May 1997, KEK,
Tsukuba (Japan). Ed. H. Hirayama
KEK Proceedings 97–5 (1997), p. 32–43
[74] A. Fassò, A. Ferrari, P.R. Sala,
Total giant resonance photonuclear cross sections for light nuclei: a database for the Fluka Monte Carlo
transport code
Proc. 3rd Specialists’ Meeting on Shielding Aspects of Accelerators, Targets and Irradiation Facilities (SATIF3),
Tohoku University, Sendai, Japan, 12-13 May 1997, OECD-NEA 1998, p. 61
[75] A. Fassò, A. Ferrari, P.R. Sala
Electron-photon transport in FLUKA: status
Proceedings of the Monte Carlo 2000 Conference, Lisbon, October 23–26 2000, A. Kling, F. Barão, M. Naka-
gawa, L. Távora, P. Vaz eds., Springer-Verlag Berlin, p. 159–164 (2001)
[76] A. Fassò, A. Ferrari, J. Ranft, P.R. Sala
FLUKA: Status and Prospective for Hadronic Applications
Proceedings of the MonteCarlo 2000 Conference, Lisbon, October 23–26 2000, A. Kling, F. Barão, M. Nakagawa,
L. Távora, P. Vaz eds., Springer-Verlag Berlin, p. 955–960 (2001)
[77] A. Fassò, A. Ferrari, P.R. Sala and G. Tsiledakis,
“Implementation of Xenon capture gammas in FLUKA for TRD background calculations”
CERN Report ALICE–INT–2001–28 (2001)
[78] A. Fassò, A. Ferrari, G. Smirnov, F. Sommerer, V. Vlachoudis,
“FLUKA Realistic Modeling of Radiation Induced Damage”
Proceedings of a Joint International Conference on Supercomputing in Nuclear Applications and Monte Carlo
2010 (SNA + MC2010), Tokyo, Japan, October 17-21, 2010
[79] A. Fassò, A. Ferrari, S. Roesler, P.R. Sala, G. Battistoni, F. Cerutti, E. Gadioli, M.V. Garzelli, F. Ballarini,
A. Ottolenghi, A. Empl and J. Ranft
The physics models of FLUKA: status and recent developments
Computing in High Energy and Nuclear Physics 2003 Conference (CHEP2003), La Jolla, CA, USA, March
24–28, 2003, (paper MOMT005), eConf C0303241 (2003), arXiv:hep-ph/0306267
[80] I.J. Feng, R.H. Pratt and H.K. Tseng
Positron bremsstrahlung
Phys. Rev. A24, 1358–1363 (1981)
[81] A. Ferrari, P.R. Sala, R. Guaraldi, F. Padoani
An improved multiple scattering model for charged particle transport
Presented at the Int. Conf. on Radiation Physics, Dubrovnik, 1991
Nucl. Instr. Meth. B71, 412–426 (1992)
[82] A. Ferrari, P.R. Sala, A. Fassò, G.R. Stevenson
Can we predict radiation levels in calorimeters?
Proc. 2nd Int. Conference on Calorimetry in High-Energy Physics, 14–18 October 1991, Capri (Italy) World
Scientific, Singapore 1992, p. 101–116
CERN EAGLE Internal Note CAL–NO–005 (1991)
[83] A. Ferrari, P.R. Sala
A new model for hadronic interactions at intermediate energies for the FLUKA code
Proc. MC93 Int. Conf. on Monte Carlo Simulation in High Energy and Nuclear Physics, Tallahassee (Florida)
22–26 February 1993. Ed. by P. Dragovitsch, S.L. Linn, M. Burbank, World Scientific, Singapore, 1994, p. 277–
288
[84] A. Ferrari, P.R. Sala
Physics of showers induced by accelerator beams
Proc. 1995 “Frédéric Joliot” Summer School in Reactor Physics, 22–30 August 1995, Cadarache (France). Ed.
CEA, Vol. 1, lecture 5b (1996)
[85] A. Ferrari, P.R. Sala, J. Ranft, and S. Roesler
The production of residual nuclei in peripheral high energy nucleus–nucleus interactions
Z. Phys. C71, 75–86 (1996)
[86] A. Ferrari, P.R. Sala, J. Ranft, and S. Roesler
Cascade particles, nuclear evaporation, and residual nuclei in high energy hadron-nucleus interactions
Z. Phys. C70, 413–426 (1996)
[87] A. Ferrari, and P.R. Sala
The Physics of High Energy Reactions
Proc. of the Workshop on Nuclear Reaction Data and Nuclear Reactors Physics, Design and Safety, International
Centre for Theoretical Physics, Miramare-Trieste (Italy) 15 April–17 May 1996, Ed. A. Gandini and G. Reffo,
World Scientific, p. 424 (1998)
[88] A. Ferrari, and P.R. Sala
Intermediate and high energy models in FLUKA: improvements, benchmarks and applications
Proc. Int. Conf. on Nuclear Data for Science and Technology, NDST–97, Trieste (Italy), 19–24 May 1997. Ed.
G. Reffo, A. Ventura and C. Grandi (Bologna: Italian Phys. Soc.) Vol. 59, Part I, p. 247 (1997)
[89] A. Ferrari, T. Rancati, and P.R. Sala
Fluka applications in high energy problems: from LHC to ICARUS and atmospheric showers
Proc. 3rd Workshop on Simulating Accelerator Radiation Environments (SARE 3), 7–9 May 1997, KEK,
Tsukuba (Japan). Ed. H. Hirayama
KEK Proceedings 97–5 (1997), p. 165–175
[90] A. Ferrari, and P.R. Sala
Treating high energy showers
In: Training course on the use of MCNP in Radiation Protection and Dosimetry, ENEA, Roma (1998), p. 233–
264
[91] A. Ferrari, P.R. Sala, A. Fassò, and J. Ranft
FLUKA: a multi-particle transport code
CERN-2005-10 (2005), INFN/TC 05/11, SLAC-R-773
[92] V. Vlachoudis
FLAIR: A Powerful But User Friendly Graphical Interface For FLUKA
Proc. Int. Conf. on Mathematics, Computational Methods & Reactor Physics (M&C 2009), Saratoga Springs,
New York, 2009
[93] T.A. Gabriel, J.E. Brau and B.L. Bishop
The Physics of Compensating Calorimetry and the New CALOR89 Code System
Oak Ridge Report ORNL/TM–11060 (1989)
[94] T.K. Gaisser, M. Honda, P. Lipari and T. Stanev
Primary spectrum to 1 TeV and beyond
Proc. 27th International Cosmic Ray Conference (ICRC 2001), Hamburg, Germany, 7–15 Aug. 2001, p. 1643–
1646 (2001)
[95] J.A. Geibel, J. Ranft
Part VI: Monte Carlo calculation of the nucleon meson cascade in shielding materials
Nucl. Instr. Meth. 32, 65–69 (1965)
[96] L.J. Gleeson and W.I. Axford
Solar Modulation of Galactic Cosmic Rays
Astrophys. J., 154, 1011 (1968)
[97] K. Goebel, Ed.
Radiation problems encountered in the design of multi-GeV research facilities:
E. Freytag, J. Ranft, “Hadronic and electromagnetic cascades”
K. Goebel, L. Hoffmann, J. Ranft, G.R. Stevenson, “Estimate of the hadronic cascade initiated by high-
energy particles”
K. Goebel, J. Ranft, G.R. Stevenson, “Induced radioactivity”
J.H.B. Madsen, M.H. Van de Voorde, J. Ranft, G.B. Stapleton, “Radiation-induced damage to machine
components and radiation heating”
J. Ranft, “The interaction of protons in machine components and beam loss distributions”
CERN Yellow Report 71–21 (1971)
[98] K. Goebel, J. Ranft, J.T. Routti, G.R. Stevenson
Estimation of remanent dose rates from induced radioactivity in the SPS ring and target areas
CERN LABII–RA/73–5 (1973)
[99] W. Guber, J. Nagel, R. Goldstein, P.S. Mettelman, and M.H. Kalos
A geometric description technique suitable for computer analysis of both nuclear and conventional vulnerability
of armored military vehicles
Mathematical Applications Group, Inc. Report MAGI–6701 (1967)
[100] K. Hänssgen, R. Kirschner, J. Ranft, H. Wetzig
Monte Carlo simulation of inelastic hadron-hadron reactions in the medium energy range (√s ≲ 3 GeV). Description of the model used and of the Monte Carlo code Hadrin
University of Leipzig report KMU–HEP–79–07 (1979)
[101] K. Hänssgen, R. Kirschner, J. Ranft, H. Wetzig
Monte-Carlo simulation of inelastic hadron nucleus reactions. Description of the model and computer code
NUCRIN
University of Leipzig report KMU–HEP–80–07 (1980)
[102] K. Hänssgen, J. Ranft
Hadronic event generation for hadron cascade calculations and detector simulation I. Inelastic hadron nucleon
collisions at energies below 5 GeV
Nucl. Sci. Eng. 88, 537–550 (1984)
[103] K. Hänssgen, H-J. Möhring, J. Ranft
Hadronic event generation for hadron cascade calculations and detector simulation II. Inelastic hadron-nucleus
collisions at energies below 5 GeV
Nucl. Sci. Eng. 88, 551–566 (1984)
[104] K. Hänssgen, S. Ritter
The Monte Carlo code DECAY to simulate the decay of baryon and meson resonances
University of Leipzig report KMU–HEP–79–14 (1979)
Comp. Phys. Comm. 31, 411–418 (1984)
[105] K. Hänssgen, J. Ranft
The Monte Carlo code Hadrin to simulate inelastic hadron-nucleon interactions at laboratory energies below
5 GeV
Comp. Phys. Comm. 39, 37–51 (1986)
[106] K. Hänssgen, J. Ranft
The Monte Carlo code Nucrin to simulate inelastic hadron-nucleus interactions at laboratory energies below
5 GeV
Comp. Phys. Comm. 39, 53–70 (1986)
[107] M. Höfert, F. Coninckx, J.M. Hanon, Ch. Steinbach
The prediction of radiation levels from induced radioactivity: discussion of an internal dump target in the PS
CERN Divisional Report DI/HP/185 (1975)
[108] M. Höfert, A. Bonifas
Measurement of radiation parameters for the prediction of dose-rates from induced radioactivity
CERN Internal Report HP–75–148 (1975)
[109] ICRU
Stopping powers for electrons and positrons
ICRU Report 37, Oxford University Press (1984)
[110] A.V. Ignatyuk, G.N. Smirenkin and A.S. Tishin
Phenomenological Description of the Energy Dependence of the Level Density Parameter
Yad. Fiz. 21, 485–490 (1975)
Sov. J. Nucl. Phys. 21, 255–257 (1975)
[111] Insoo Jun, Wousik Kim and R. Evans
Electron Nonionizing Energy Loss for Device Applications
IEEE Trans. Nucl. Sci. 56, 3229–3235 (2009)
[112] R. Jaarsma and H. Rief
TIMOC 72 code manual
Ispra report EUR 5016e (1973)
[113] F. James
A review of pseudorandom number generators
Comp. Phys. Comm. 60, 329–344 (1990)
[114] L. Kim, R.H. Pratt, S.M. Seltzer, M.J. Berger
Ratio of positron to electron bremsstrahlung energy loss: an approximate scaling law
Phys. Rev. A33, 3002–3009 (1986)
[115] H.W. Koch and J.W. Motz
Bremsstrahlung Cross-Section Formulas and Related Data
Rev. Mod. Phys. 31, 920–955 (1959)
[116] H. Kowalski, H.-J. Möhring, T. Tymieniecka
High speed Monte Carlo with neutron component — NEUKA
DESY 87–170 (1987)
[117] M. Krell and T.E.O. Ericson
Energy levels and wave functions of pionic atoms
Nucl. Phys. B11, 521–550 (1969)
Model
University of Leipzig report UL–HEP 92-09 (1992)
[135] H.-J. Möhring, J. Ranft, C. Merino and C. Pajares
String fusion in the Dual Parton Model and the production of antihyperons in heavy ion collisions
University of Leipzig report UL–HEP 92–10 (1992)
[136] G.Z. Molière
Theorie der Streuung schneller geladener Teilchen I–Einzelstreuung am abgeschirmten Coulomb-Feld
Z. Naturforsch. 2a, 133–145 (1947)
[137] G.Z. Molière
Theorie der Streuung schneller geladener Teilchen II — Mehrfach und Vielfachstreuung
Z. Naturforsch. 3a, 78–97 (1948)
[138] G.Z. Molière
Theorie der Streuung schneller geladener Teilchen III — Die Vielfachstreuung von Bahnspuren unter
Berücksichtigung der statistischen Kopplung
Z. Naturforsch. 10a, 177–211 (1955)
[139] N.F. Mott
The scattering of fast electrons by atomic nuclei
Proc. R. Soc. London A124, 425–442 (1929)
[140] W.R. Nelson, H. Hirayama, D.W.O. Rogers
The Egs4 code system
SLAC–265 (1985)
[141] R.E. MacFarlane and A.C. Kahler
Methods for Processing ENDF/B-VII with NJOY
Nucl. Data Sheets 111 2739–2890 (2010)
http://t2.lanl.gov/codes/
[142] K. Nakamura et al. (Particle Data Group)
Review of particle physics
J. Phys. G37, 075021 (2010)
http://pdg.lbl.gov/
[143] M. Pelliccioni
Overview of fluence-to-effective dose and fluence-to-ambient dose equivalent conversion coefficients for high
energy radiation calculated using the FLUKA code
Radiation Protection Dosimetry 88 (2000) 279-297
[144] L.I. Ponomarev
Molecular Structure Effects on Atomic and Nuclear Capture of Mesons
Annual Review of Nuclear Science, 23, 395–430 (1973)
[145] R.E. Prael and H. Lichtenstein
User Guide to LCS: the LAHET Code System
Los Alamos Report LA–UR–89–3014 (1989)
[146] R.E. Prael, A. Ferrari, R.K. Tripathi, A. Polanski
Comparison of nucleon cross section parameterization methods for medium and high energies
Proc. 4th Workshop on Simulating Accelerator Radiation Environments (SARE4), 14–16 September 1998,
Knoxville (Tenn.), p. 171–181
[147] R.E. Prael, A. Ferrari, R.K. Tripathi, A. Polanski
Plots supplemental to: “Comparison of nucleon cross section parameterization methods for medium and high
energies”
Los Alamos report LA–UR–98–5843 (1998)
[148] S. Qian and A. Van Ginneken
Characteristics of inelastic interactions of high energy hadrons with atomic electrons
Nucl. Instr. Meth. A256, 285–296 (1987)
[149] J. Ranft
Monte Carlo calculation of the nucleon-meson cascade in shielding materials initiated by incoming proton beams
with energies between 10 and 1000 GeV
CERN Yellow Report 64–47 (1964)
[150] J. Ranft
Improved Monte-Carlo calculations of the nucleon-meson cascade in shielding materials
CERN Report MPS/Int. MU/EP 66–8 (1966)
[151] J. Ranft
Improved Monte-Carlo calculation of the nucleon-meson cascade in shielding material I. Description of the
method of calculation
[167] J. Ranft
The diffractive component of particle production in the dual multistring fragmentation model
Z. Phys. C33, 517–523 (1987)
[168] J. Ranft, W.R. Nelson
Hadron cascades induced by electron and photon beams in the GeV energy range
Nucl. Instr. Meth. A257, 177–184 (1987)
SLAC–PUB–3959 (1986)
[169] J. Ranft
Hadron production in hadron-nucleus and nucleus-nucleus collisions in the dual Monte Carlo multichain frag-
mentation model
Phys. Rev. D37, 1842–1850 (1988)
[170] J. Ranft
Hadron production in hadron-nucleus and nucleus-nucleus collisions in a dual parton model modified by a
formation zone intranuclear cascade
Z. Phys. C43, 439–446 (1988)
[171] J. Ranft and S. Roesler
Single diffractive hadron-nucleus interactions within the Dual Parton Model
Z. Phys. C62, 329–396 (1994)
[172] J. Ranft
Dual parton model at cosmic ray energies
Phys. Rev. D51, 64–84 (1995)
[173] J. Ranft
33 years of high energy radiation Monte Carlo calculations in Europe as seen from CERN
Proc. 2nd Workshop on Simulating Accelerator Radiation Environments (SARE2), CERN, Geneva, 9–11 Oc-
tober 1995
CERN/TIS–RP/97–05 (1997), p. 1–13
[174] R. Ribberfors
Relationship of the relativistic Compton cross section to the momentum distribution of bound electron states
Phys. Rev. B12, 2067–2074 (1975)
Erratum: Phys. Rev. B13, 950 (1976)
[175] S. Ritter, J. Ranft
Simulation of quark jet fragmentation into mesons and baryons on the basis of a chain decay model
University of Leipzig report KMU–HEP–79–09 (1979)
Acta Phys. Pol. B11, 259–279 (1980)
[176] S. Ritter
QCD jets in e+e− annihilation and the transition into hadrons
Z. Phys. C16, 27–38 (1982)
[177] S. Ritter
Monte-Carlo code Bamjet to simulate the fragmentation of quark and diquark jets
University of Leipzig report KMU–HEP 83–02 (1983)
Comp. Phys. Comm. 31, 393–400 (1984)
[178] S. Ritter
Monte-Carlo code Parjet to simulate e+ e− -annihilation events via QCD Jets
Comp. Phys. Comm. 31, 401–409 (1984)
[179] S. Roesler, R. Engel and J. Ranft
The single diffractive component in hadron-hadron collisions within the two-component Dual Parton Model
Z. Phys. C59 481–488 (1993)
[180] S. Roesler, R. Engel and J. Ranft
The Monte Carlo event generator DPMJET-III,
Proc. Monte Carlo 2000 Conference, Lisbon, October 23–26 2000, A. Kling, F. Barão, M. Nakagawa, L. Távora,
P. Vaz eds., Springer-Verlag Berlin, p. 1033–1038, 2001.
[181] S. Roesler and G.R. Stevenson
deq99.f - A FLUKA user-routine converting fluence into effective dose and ambient dose equivalent
Technical Note CERN-SC-2006-070-RP-TN, EDMS No. 809389 (2006)
[182] K. Roeed, M. Brugger and C. Pignard
PTB irradiation tests of the LHC radiation monitor (RadMon)
CERN ATS Note 2011–012, 2011
http://cds.cern.ch/record/1329478?ln=es
[183] K. Roeed, M. Brugger, D. Kramer et al.
Method for measuring mixed field radiation levels relevant for SEEs at the LHC
DEFAULTS input command, 32, 49, 51, 58–61, 92, 98, 99, distance
124 minimum from a boundary, 7, 395
missing, 58, 59 unit, 48
defaults, 13, 50, 53, 56, 58, 320 distant collisions, 5
overriding, 58 distributed
resetting, 50 energy
sets, 58 change of position, 351
settings, 92, 395 density, 40
deflection angle, 5 fluence, 40
in an electric field, maximum, 106 track-length, change of position, 352
delayed radiation, 404 distributions of particles
delta ray, 5, 6, 39, 53, 60, 98, 107, 146, 308, 361, 396 emerging from inelastic interactions, 268
production, 87, 92–95 entering an inelastic interactions, 268
threshold, 32, 60, 310 DIVBM, 71
stack overflow, 359 divergence, 74
DELTARAY input command, 32, 60, 92–95, 98, 205, 307 DNEAR, 7, 141, 142, 303, 311, 395
density, 16, 46, 163, 307 DO-loop in input, 50
effect, 5, 18, 55, 60, 164, 165, 234, 308, 396 Doppler broadening, 6, 399
effective, 165, 396 dose, 226, 249
for transport, 167 binning, 254
fictitious, 54, 165 rate, 89
inhomogeneous, 54 scoring, 62
local microscopic, 165 modifier, 64
microscopic different from macroscopic, 164 to commercial flights, 394
partial, 307 dose equivalent, 19, 48, 53, 68, 249
physical, 167 ambient, 68
scaling, 53, 87 binning, 254
DESY, 394 calculated with a QF, 251
DETECT input command, 62, 101, 307, 313 calculated with conversion coefficients, 251
output, 313 calculated with a QF, 250
detector, 11, 61, 89, 91 scoring, 64
design, 3 dose equivalent binning
numbering, 90, 307, 349, 353 calculated with a QF, 252
regions, 101 DOSE–EQ scoring, 68, 252
studies, 10 DOSEQLET scoring, 69
detectors dosimetry, 3, 10, 394, 399
number of, 9 phantom, 299
deuterium, 2.2 MeV transition, 321 double differential
DFFCFF user routine, 188, 191, 351 cross section, 267
differences of bodies, 294 fluence/current distribution, 247
differential distribution, 29 yield, 267
diffraction, 400 double precision, 3, 9, 394
antibaryon, 401 downscattering
low mass, 401 biasing, 8, 64, 156, 307, 403
model, 401 importances, 156
diffractive events, 393 matrix, 160, 305, 320, 321
diffusive reflectivity, 334 probability, 156
DIMPAR INCLUDE file, 279, 349 biasing, 54, 156
direction dp/dx
biasing tabulation, 92–95, 98
λ, 371 accuracy, 99
of decay neutrinos, 403 limits, 99
user-defined, 8, 371 DPA, 5, 58, 165, 249
cosines, 333, 354, 356, 367, 373 binning, 250–252, 254
of the magnetic field, 356 energy threshold, 166
DISCARD input command, 32, 58, 61, 104, 107, 306 DPBEAM, 71
discarded particles, 53, 58, 104, 107, 306 DPM, 4, 398, 400–402
discarding heavy particles, 104 Dpmjet, 5, 37, 39, 148, 404
displacement initialisation, 404
lateral, 5 library, 37
longitudinal, 5 threshold, 202, 203
Displacements Per Atom, see DPA Dpmjet-2.5, 404
Dpmjet-3, 405 shielding, 3, 33, 56, 58, 63, 96, 398, 403
Dpmjet-II.5, 381 and photon
Dresner L., 397, 402 cutoffs, 307
drift, 351, 352 multiple scattering, 307
of an instrument, 64 transport, 397
Dual Parton Model, 4, 393, 400 dE/dx, 118
dummy routines, 39 production cutoff, 113, 116
dump, 40, 331 step, 53, 118
dumping transport, 107
energy deposition events, 331 cutoff, 113
source particles, 331 lowest limit, 6
trajectories, 331 transport cutoff, 116
dxf format, 34 electronuclear
dynamical interactions, 54
allocation of storage, 209 reactions, 198
dimensions, 3 electronuclear interaction, 4
memory allocation, 9, 304, 394, 404 ELECUT, 114
dynamically opened file, 248, 255, 258, 263 ELL body, 285
ellipsoid, 285
E.U., 394 elliptical cylinder, 7
E1, E2 and M1 transitions, 402 EM–CASCAde default, 61, 93
Ea-Mc, 394 EMF input command, 61, 92–95, 99, 102, 107, 113
Earth Emf, 6, 107, 161, 397
dipole field, 136 EMF–BIAS input command, 33, 63, 81, 102, 108, 214
equatorial magnetic field, 136 emfadd, 37
geomagnetic field, 381 EMFCUT input command, 18, 32, 59–61, 94, 95, 99, 102,
geometry, 381 110, 113, 113
multipole field, 136 EMFFIX input command, 59, 92–95, 118
radius, 136 EMFFLUO input command, 61, 92–94, 120, 308
Ecut, 116 EMFRAY input command, 61, 92–94, 122, 308
edge fine structure, 7 EMFSCO routine, 333, 358, 359, 361
EET, 92, 93, 401 EMFSTK COMMON, 349, 361, 368
EET/TRANsmut default, 93 emission of energetic light fragments, 401
effective Empl A., 404, 405
atomic number, 17 END
charge, 44, 404 body line, 294
Z, 307, 396 region line, 295–297
Z/A, 308 ENDF/B, 322, 399
effective dose, 68 ENDRAW user entry, 359, 368
efficiency, 3, 59 ENDSCP user routine, 64, 244, 351
Egs4, 110, 142, 393, 395–398, 403 ENEA, 322, 394, 399
elastic Bologna, 399, 402
cross section energy
file, 38 amplifier, 10, 394, 399, 402
interaction, 361 and momentum conservation, 400
recoil, 333, 359 balance, 54, 56, 126, 310, 314
scattering, 4, 6, 322, 397 budget, 33
length, 18, 307 conservation, 394
ELCFIELD input command, 106 crossing a surface, 247
electric field, 5, 8, 53, 106, 213 cutoff, 11, 32, 54, 59, 398
assignment to regions, 66 density, 9, 19, 54, 227, 244, 245, 249, 309
unit, 48 binning, 250–252, 254
electromagnetic deposited, 55, 392
calculations, 58 by electrons and positrons, 310
cascade, 92, 93, 392 by light ion recoils, 310
parameterised, 393 by low-energy neutrons, 310
dissociation, 4, 202, 204, 404 by nuclear recoils, 310
preprocessor, 398 in vacuum, 197
showers, 33, 307, 310 ionisation, 310
ElectroMagnetic Fluka, see Emf per event, 101
electron, 6 total, 310
accelerator, 10 deposition, 6, 53, 226, 249, 255
    ions, 68
    scoring estimators, 68
first call
    informative message, 348
    initialisation, 366
fissile materials, 310
fission, 4, 6, 107, 400, 402, 404
    density, 9, 227, 249
        binning, 250, 254
    fragments, 44, 322
    high energy, 45
    low energy, 45, 323
    neutron, 6, 321
        average number, 305
        multiplicity, 160, 322
        production, 305
    probability
        group-dependent, 322
    RAL model, 402
    routines, 39
    spectrum, 322
    yield file, 38
fits, parameterised, 4
FIXED input command, 132, 134
fixed format, 49
    input, 132
FLABRT routine, 231
Flair, 34, 302
FLDSCP user routine, 64, 244, 352
FLEUR, 404
flexibility, 3
FLKMAT COMMON, 349, 350
FLKSTK COMMON, 349, 360, 361, 363, 366–368
FLNRRN routine, 364
floating point
    body data, 279
    geometry data, 303
FLOOD, 78
Florida State University, 394
Flrn64, 217
FLRNDM routine, 364
fluctuations, 3, 33, 54, 63, 82, 142, 155, 404
fluence, 9, 19, 244, 249, 251, 257, 262
    binning, 254
    differential energy spectrum, 62
    distributions, differential, 258, 263
    double-differential angle-energy spectrum, 62
    estimators, 393, 404
    integral, 258, 263
    scoring, 62, 246
        modifier, 64
    to dose conversion coefficients, 68
Flugg, 137, 395
Fluka
    applications, 10
    data files, 38
    distribution
        tar file, 37
    earlier versions, 9
    executable, 37, 38
    first generation, 392
    history, 392
    libraries, 35
    license, 40, 303
    main, 39
    materials, 17
    medium number, 304
    modules, 37, 39
    neutron cross section libraries, 322
    second generation, 392
    source code, 35, 37
    third generation, 393
    website, 11, 21, 35, 62
Fluka-Dpmjet
    switch energy, 203
fluka.stop, 231
Fluka81, 393
Fluka82, 393
Fluka86, 39, 393
Fluka86-87, 10
Fluka87, 393–396, 398, 405
flukaadd, 37
flukadpm, 37
FLUKAFIX input command, 32, 59, 92–95, 118, 133
flukahp, 37
FlukaN, 399
Fluktuierende Kaskade, 392
Flunev, 398
fluorescence, 7, 53, 61, 92–95, 120, 308, 397
    and Auger routines, 39
    data file, 38
FLUPRO, 24
FLUSCW user routine, 64, 90, 191, 244, 252, 307, 335, 349, 352, 355
Fokker-Planck diffusion equation, 382
Force Field Approximation, 382
form factor, 5, 7, 38, 40, 92–95, 174, 176, 353, 396
    in δ-ray production, 202
    Thomas-Fermi, 176
format
    high precision, 14
    name based, 278
formation zone, 401
FORMFU user routine, 176, 353
Fortran 90, 39
    OPEN statement, 181
four-momentum, 265
fragmentation
    of small nuclei, 402
    region, 401
FREE input command, 12, 49, 58, 132, 134, 142
free
    electron lasers, 10
    format, 12, 49, 58, 137, 278, 280
        in geometry input, 142
        input, 54, 134, 141
        region input, 294, 296
    parameters, 3
frequency, 334
FRGHNS user routine, 186, 191, 334, 354, 373
Frisoni M., 399
FUDGEM parameter, 113
fully analogue run, 142
FUSRBV user routine, 250, 251, 253, 354, 356
RAL high-energy fission model, 402
random number
    gaussian-distributed, 39
    generator, 54, 217, 364, 394
        calls, 217, 231, 308
        initialisation, 217
        routines, 39
    independent sequences, 394
    initialisation, 11
    seeds, 217, 311
        file, 181, 303
    sequence, 22, 58, 65, 141, 142
        independent, 65
    skipping, 127
random.dat, 21
RANDOMIZe input command, 22, 65, 127, 217
Ranft J., 4, 7, 9, 392, 393, 400, 401, 404, 405
ranging out, 59, 395, 396
    of charged particles below threshold, 197
ranging out below cutoff, 5
Ranmar, 394
rapidity, 21, 265, 404
rare interactions, 63
RAW body, 285
RAY, 8, 43, 44, 73, 319, 364, 375
    output, 319
ray number, 376
Rayleigh scattering, 7, 53, 61, 92–95, 122, 213, 308, 334, 339, 361, 398
RCC body, 282
reactions
    endothermic, 127
    exothermic, 127
README file, 35
READONLY, 181
REC body, 283
recoil
    protons, 161, 304
recombination, 340
rectangular parallelepiped, 280
    general, 281
redirection symbols, 38
reduced storage, 124
reference frame, for a beam, 56
reflection, 298, 401
    at boundaries, 334
    coefficient, 334
reflectivity, 191, 334
    diffusive, 191
    index, 188
    specular, 191
    user-defined, 362
refraction, 167, 334, 373, 401
    at boundaries, 334
    index, 188, 191, 334, 336, 339
        derivatives, 188
    user-defined, 363
reggeon, 400
region
    binning, 227, 249–251, 255
    data, 294
    fixed format
        allowing more than 10000 regions, 294
    fixed format allowing more than 10000 regions, 279
    importance, 370, 403
    input echo, 303
    name, 7, 296
    numbers, 295
    table, 279
    volumes, 19, 21, 227, 297, 303
regions, 11, 56, 138, 277
    max. number, 7
relativistic kinematics, 402
Relativistic Quantum Molecular Dynamics, 405
RELEASE-NOTES, 35
rem-counters, 156
reproducibility
    of runs, 141
    of the random number sequence, 141, 304
residual dose, 89, 91
residual nuclei, 4, 9, 54, 208, 218, 244, 374, 403
    data, 322
    distributions, 402
    information, 325
    in the neutron library, 305
    printing, 160
    production, 402
        by low-energy neutrons, 322
    scoring, 40, 62, 399, 404
        modifier, 64
    user handling, 374
RESNUC COMMON, 349
RESNUCLEi input command, 21, 32, 62, 89, 218, 307, 315
    output, 315
resonance model, 4, 400
resonances, non resolved, 321
response functions, 40
restricted ionisation fluctuations, 60
Restricted NIEL, 46
results, 55
RFLCTV user routine, 191, 334, 362
rfluka, 22, 24, 35, 38, 181, 303, 311, 348
    temporary subdirectory, 24, 25, 38
rfluka.stop, 231
RFRNDX user routine, 188, 191, 334, 363
RHEL, 392
RHOR factor, 165
Rief H., 209, 395
right
    angle wedge, 285
    circular cylinder, 282
    elliptical cylinder, 283
Roesler S., 404, 405
Root, 404
Rossendorf, 394, 395
ROT–DEFI input command, 57
ROT–DEFIni input command, 62, 221, 254, 293
rotation, 298
    transformation, 356
rotation/translation
    matrix, 224
    transformations, 62
roto-translation
    of body coordinates, 293
ROTPRBIN input command, 57, 62, 124, 224, 254
Routti J., 9, 392
RPP body, 280
Rqmd, 5, 37, 39, 148, 405
    fast cascade, 202
    library, 37
    preequilibrium step, 202
    threshold, 202, 203
Rqmd-2.4, 404
Rqmd-Dpmjet
    switch energy, 203
RR, 271
    counters, 33
    level, 271
RR/Splitting counters, 309
RRHADR, 80
Rubbia A., 402
Rubbia C., 394, 399, 401, 402
run
    documentation, 13
    sequential number, 38
Russian Roulette, 8, 33, 53, 55, 63, 64, 80, 82, 110, 151, 270, 275, 307, 403
    inhibiting, 82
Rutherford E., 5
Sala P.R., 4, 7, 9, 392, 394–403
sampling from a non-monoenergetic spectrum, 73
Sandberg J., 392, 393
Sauter F., 7
scaling, 401
    laws, 3
scattering
    angles, printing, 160
    Bhabha, 6, 398
    incoherent, 353
    kaon-proton, 4
    low energy neutrons, 8
    Møller, 6, 398
    multiple, 353
    non performed, 309
    on hydrogen nuclei, 4
    pion-proton, 4
    probabilities, printing, 160
    single, 6, 353
    suppression, 5
    transfer probability, 320
    two-body, 401
scintillation, 340
    light, 336
    photon, 334
    production, 398
    radiation, 183, 334
scintillators, 236, 396
SCOHLP COMMON, 307, 349, 350, 353
SCORE input command, 19, 61, 126, 226, 309
scoring, 9, 10, 61, 404
    by region, 226
    conditional, 245, 353
    detectors, 89
    during irradiation, 149
    normalisation, 127
    of particle yields, 404
    options, 61
    routines, 39
    weighting, 244
scratch files, 181, 303, 311
screening corrections for beta decay, 215
screening factor, 5
SDUM parameter, 12, 49
sea quark distributions, 400
secondaries
    in a collision, number of, 80
    in inelastic hadron interactions, 310
    in low-energy neutron interactions, 310
secondary
    neutron production, 321
    particle, 357
    stack, management, 357
self-consistency, 395
self-shielded cross sections, 321
self-shielding, 321
Seltzer S.M., 6, 165, 175, 397
semi-analogue mode for radioactive decays, 73, 89, 214, 215
separators, 141, 280
    in free format, 134, 142
setting options, 58
settings, 11
shell
    corrections, 5, 307, 396
    model, 401
SHIELDINg default, 95
shielding, 10, 82, 394, 395
    calculations, 63, 95
    design, 33
signal, 101
silicon damage weighting functions, 38
single
    chains, 400
    interaction level, 3
    isotope yields, 226, 255
    precision, 127, 224
single scattering, 5, 6, 8, 54, 59, 174, 307, 309, 398
    activated everywhere, 176
    number of steps, 174
    number when crossing a boundary, 176
    option, 175
skipping random numbers, 217
SLAC, 394
SLATEC, 39
smeared mass distributions, 400
smooth approach to boundaries, 395
SODRAW user entry, 360, 368
SOEVSV user routine, 363, 366, 368
Solar
    Particle Event, 381
solar
    activity, 381, 382
    maximum, 381
    minimum, 381
    modulation, 382
    wind, 381, 382
solar flares, 39
solar particle event, 136
solid angle, 247
    unit, 48
Sorge H., 405
SOUEVT COMMON, 349, 363
SOURCE input command, 14, 56, 64, 228
SOURCE user routine, 14, 37, 55, 56, 64, 71–74, 76, 101, 228, 248, 255, 258, 263, 268, 306, 331, 333, 336, 360, 363, 366, 368, 372, 375
source, 55
    biased, 366
    Cartesian shell, 77
    coordinates, 333
    cylindrical shell, 77
    distribution, 56
    events, 40
        saving, 363
    extended in space, 14, 76, 77
    isotropic, 14, 71, 73, 79
    linear, 364
        uniformly distributed, 368
    particles, 40, 241, 360, 372
        dumping, 360
        reading from a file, 363
        sampling from a biased distribution, 365
        sampling from a generic distribution, 364
        sampling from a uniform distribution, 364
    producing an isotropic fluence, 78
    routine, 14
    sampling algorithm, 363
    spatially extended, 76
    special, 230
    spherical shell, 76
    user written, 363
SOURCM COMMON, 349, 360, 365
South Atlantic Anomaly, 384
space radiation, 404
spaghetti calorimeters, 8
spallation, 60
    products, 218
    sources, 394
SPAROK, 368
spatial mesh, 249, 254
SPAUSR, 361
specific activity, 45
    binning, 250, 254
SPECSOUR input command, 56, 72, 136, 230, 378, 381, 385, 390
spectrum
    channels, 102
    tail, 365
specular reflectivity, 334
speed of light, 376
SPH body, 282
sphere, 282
spherical geometry, 393
spin
    -relativistic
        corrections, 5, 174
        effects, 5
    effects, 5
splitting, 8, 33, 53, 55, 63, 64, 80, 82, 110, 270, 271, 275, 403
    inhibiting, 82
    level, 271
splitting/RR counters, 64
SPS, 392
SPY, 401
square transverse momentum, 265
SSC, 393, 403
stack, 333, 361, 363, 367
    full, 153
    index, 364
    management, 40
    of secondaries, 357
    pointer, 360, 364, 366
    snapshot, 363
    user variables, 368
    variables, assigning, 363, 364
standard
    deviation, 61
    Fluka neutron library, 161
    input, 181
    output, 58, 181, 217, 218, 238–240, 252, 255, 257, 258, 262, 263, 266, 268, 278, 331
star, 55, 60, 62, 226, 249, 254, 310
    density, 9, 19, 54, 60, 62, 226, 244, 245, 249, 252, 309, 392
        binning, 250, 254
    number of, 309
    scoring, 254
    threshold, 60
    total weight, 309
START input command, 22, 50, 65, 231, 308
start of the job, 231
starting
    signal, 11
    the calculation, 65
statistical
    convergence, 33
    error, 28
statistics, 61
    final global, 309
step, 106, 118, 133, 172, 232, 371
    control, 5
    cut at a boundary, 175
    deflections, 397
    endpoints, 351, 352
    length, 59
        optimisation, 9
    minimum in an electric field, 106
    optimisation, 142, 171, 174, 175
    reflected from a boundary, 175
    size, 11, 55, 232
        absolute, 232
        by region, 232
        in vacuum, 232
        independent of bin size, 9
        maximum, 232, 308
        minimum, 232, 308
    stretching factor, 175
STEPSIZE input command, 32, 59, 118, 172, 232, 308
STERNHEIme input command, 49, 60, 234
XBEAM, 76, 78
XCC body, 290
XEC body, 290
xenon capture gammas, 321, 399
Xsec medium number, 304
XSPOT, 71
XYP body, 288
XZP body, 288
YBEAM, 76, 78
YCC body, 290
YEC body, 290
yield, 9, 244
    double differential, 265
    scoring, 55, 62
    vs. time, 265
yrast line, 402
YSPOT, 72
YZP body, 288