Java 3D API Documentation
API Specification
JavaSoft
A Sun Microsystems, Inc. Business
901 San Antonio Road
Palo Alto, CA 94303 USA
415 960-1300 fax 415 969-9131
© 1997, 1998, 1999, 2000 Sun Microsystems, Inc.
901 San Antonio Road, Palo Alto, California 94303 U.S.A.
All rights reserved.
RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the United States
Government is subject to the restrictions set forth in DFARS 252.227-7013 (c)(1)(ii) and
FAR 52.227-19.
The release described in this document may be protected by one or more U.S. patents, foreign patents, or pending applications.
Sun Microsystems, Inc. (SUN) hereby grants to you a fully paid, nonexclusive, nontransferable, perpetual, worldwide limited license (without the right to sublicense) under SUN’s intellectual property rights that are essential to practice this specification. This license allows and is limited to the creation and distribution of clean-room implementations of this specification that (i) are complete implementations of this specification, (ii) pass all test suites relating to this specification that are available from SUN, (iii) do not derive from SUN source code or binary materials, and (iv) do not include any SUN binary materials without an appropriate and separate license from SUN.
Java, JavaScript, and Java 3D are trademarks of Sun Microsystems, Inc. Sun, Sun Microsystems, the Sun logo, Java and HotJava are trademarks or registered trademarks of Sun Microsystems, Inc. UNIX® is a registered trademark in the United States and other countries, exclusively licensed through X/Open Company, Ltd. All other product names mentioned herein are the trademarks of their respective owners.
THIS PUBLICATION IS PROVIDED “AS IS” WITHOUT WARRANTY OF ANY
KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE, OR NON-INFRINGEMENT.
THIS PUBLICATION COULD INCLUDE TECHNICAL INACCURACIES OR TYPOGRAPHICAL ERRORS. CHANGES ARE PERIODICALLY ADDED TO THE INFORMATION HEREIN; THESE CHANGES WILL BE INCORPORATED IN NEW EDITIONS OF THE PUBLICATION. SUN MICROSYSTEMS, INC. MAY MAKE IMPROVEMENTS AND/OR CHANGES IN THE PRODUCT(S) AND/OR THE PROGRAM(S) DESCRIBED IN THIS PUBLICATION AT ANY TIME.
Contents
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
1 Introduction to Java 3D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1
1.2 Programming Paradigm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
1.2.1 The Scene Graph Programming Model . . . . . . . . . . . . . . .2
1.2.2 Rendering Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
1.2.3 Extensibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
1.3 High Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
1.3.1 Layered Implementation. . . . . . . . . . . . . . . . . . . . . . . . . . .4
1.3.2 Target Hardware Platforms . . . . . . . . . . . . . . . . . . . . . . . .4
1.4 Support for Building Applications and Applets . . . . . . . . . . . . . . . . . . .5
1.4.1 Browsers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
1.4.2 Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
1.5 Overview of Java 3D Object Hierarchy. . . . . . . . . . . . . . . . . . . . . . . . . .6
1.6 Structuring the Java 3D Program. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6
1.6.1 Java 3D Application Scene Graph . . . . . . . . . . . . . . . . . . .6
1.6.2 Recipe for a Java 3D Program . . . . . . . . . . . . . . . . . . . . . .8
1.6.3 HelloUniverse: A Sample Java 3D Program . . . . . . . . . . .9
2 Java 3D Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1 Basic Scene Graph Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11
2.1.1 Constructing a Simple Scene Graph. . . . . . . . . . . . . . . . .12
2.1.2 A Place For Scene Graphs . . . . . . . . . . . . . . . . . . . . . . . .12
2.1.3 SimpleUniverse Utility . . . . . . . . . . . . . . . . . . . . . . . . . . .15
2.1.4 Processing a Scene Graph. . . . . . . . . . . . . . . . . . . . . . . . .15
2.2 Features of Java 3D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16
2.2.1 Bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16
2.2.2 Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .17
2.2.3 Live and/or Compiled. . . . . . . . . . . . . . . . . . . . . . . . . . . .17
D Exceptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
D.1 BadTransformException . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .537
D.2 CapabilityNotSetException . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .538
D.3 DanglingReferenceException . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .538
D.4 IllegalRenderingStateException . . . . . . . . . . . . . . . . . . . . . . . . . . . . .539
D.5 IllegalSharingException . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .539
D.6 MismatchedSizeException . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .540
D.7 MultipleParentException . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .540
D.8 RestrictedAccessException . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .540
D.9 SceneGraphCycleException . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .541
D.10 SingularMatrixException. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .541
D.11 SoundException. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .542
E Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
E.1 Fog Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .543
E.2 Lighting Equations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .544
E.3 Sound Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .546
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
Index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
Figure 10-3 An Interpolator Set to a Loop Count of 1 with Mode Flags Set to Enable Only
the Alpha-Decreasing and Alpha-at-0 Portion of the Waveform . . . . . . 287
Figure 10-4 An Interpolator Set to a Loop Count of 1 with Mode Flags Set to Enable All
Portions of the Waveform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
Figure 10-5 An Interpolator Set to Loop Infinitely and Mode Flags Set to Enable Only the
Alpha-Increasing and Alpha-at-1 Portion of the Waveform . . . . . . . . . . 288
Figure 10-6 An Interpolator Set to Loop Infinitely and Mode Flags Set to Enable Only the
Alpha-Decreasing and Alpha-at-0 Portion of the Waveform . . . . . . . . . 288
Figure 10-7 An Interpolator Set to Loop Infinitely and Mode Flags Set to Enable All
Portions of the Waveform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
Figure 10-8 How an Alpha-Increasing Waveform Changes with Various Values of
increasingAlphaRampDuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
Figure 14-1 Minimal Immediate-Mode Structure. . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
Figure A-1 Math Object Hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
Figure B-1 A Generalized Triangle Strip . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
Figure B-2 A Generalized Triangle Strip . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Figure B-3 Encoding of the Six Sextants of Each Octant of a Sphere . . . . . . . . . . . 470
Figure B-4 Sextant Coordinates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
Figure B-5 Sextant Neighbors and Their Relationships . . . . . . . . . . . . . . . . . . . . . . 472
Figure B-6 Bit Layout of Compressed Geometry Instructions . . . . . . . . . . . . . . . . . 478
Figure C-1 Display Rigidly Attached to the Tracker Base . . . . . . . . . . . . . . . . . . . . 512
Figure C-2 Display Rigidly Attached to the Head Tracker (Sensor). . . . . . . . . . . . . 514
Figure C-3 A Portion of a Scene Graph Containing a Single Screen3D Object . . . . 520
Figure C-4 A Single-Screen Display Environment . . . . . . . . . . . . . . . . . . . . . . . . . . 520
Figure C-5 A Portion of a Scene Graph Containing Three Screen3D Objects . . . . . 521
Figure C-6 A Three-Screen Display Environment . . . . . . . . . . . . . . . . . . . . . . . . . . 521
Figure C-7 The Camera-based View Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
Figure C-8 A Perspective Viewing Frustum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
Figure C-9 Perspective View Model Arguments. . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
Figure C-10 Orthographic View Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
Figure E-1 Signal to Only One Ear Is Direct . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
Figure E-2 Signals to Both Ears Are Indirect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
Figure E-3 ConeSound with a Single Distance Gain Attenuation Array . . . . . . . . . 550
Figure E-4 ConeSound with Two Distance Attenuation Arrays . . . . . . . . . . . . . . . . 550
This document describes the Java 3D™ API, version 1.2, and presents some details on the implementation of the API. This specification is not intended as a programmer’s guide.
This specification is written for 3D graphics application programmers. We assume
that the reader has at least a rudimentary understanding of computer graphics. This
includes familiarity with the essentials of computer graphics algorithms as well as
familiarity with basic graphics hardware and associated terminology.
Related Documentation
This specification is intended to be used in conjunction with the browser-accessible, javadoc-generated API reference.
Style Conventions
The following style conventions are used in this specification:
• Lucida type is used to represent computer code and the names of files and
directories.
• Bold Lucida type is used for Java 3D API declarations.
• Bold type is used to represent variables.
• Italic type is used for emphasis and for equations.
Programming Conventions
Java 3D uses the following programming conventions:
• The default coordinate system is right-handed, with +Y being up, +X
horizontal to the right, and +Z directed toward the viewer.
Acknowledgments
We gratefully acknowledge Warren Dale for writing the Sound API portion of this specification and Daniel Petersen for writing the scene graph sharing portion of the specification. We especially acknowledge Bruce Bartlett for his invaluable assistance with the editing, formatting, and indexing of the specification. Without Bruce’s considerable help, this book would not have been possible.
We also thank the many individuals and companies that provided comments and
suggestions. They have improved the Java 3D API.
Henry Sowizral
Kevin Rushforth
Michael Deering
Sun Microsystems, Inc.
April 2000
1.1 Goals
Java 3D was designed with several goals in mind. Chief among them is high performance. Several design decisions were made so that Java 3D implementations can deliver the highest level of performance to application users. In particular, when trade-offs were made, the alternative that benefited runtime execution was chosen.
Other important Java 3D goals are to
Version 1.2, March 2000
1.2.3 Extensibility
Most Java 3D classes expose only accessor and mutator methods. Those methods operate only on that object’s internal state, making it meaningless for an application to override them. Therefore, Java 3D does not provide the capability to override the behavior of Java 3D attributes. To make Java 3D work correctly, applications must call “super.setXxxxx” for any attribute state set method that is overridden.
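This rule can be sketched with toy stand-in classes (plain Java; the names here are ours, not the Java 3D API’s): an application subclass may add behavior to a setter, but it must delegate to the superclass so the state the engine reads stays current.

```java
// Toy stand-in classes (NOT the real Java 3D API) illustrating the
// super-delegation rule: an overridden attribute setter must call the
// superclass setter, or the engine-visible state is never updated.
class AttributeNode {
    private double scale = 1.0;               // state the renderer would read
    public void setScale(double s) { scale = s; }
    public double getScale() { return scale; }
}

class LoggingAttributeNode extends AttributeNode {
    int writeCount = 0;                       // extra application bookkeeping
    @Override
    public void setScale(double s) {
        writeCount++;                         // app-specific behavior is fine...
        super.setScale(s);                    // ...but super must still be called
    }
}
```

Omitting the `super.setScale(s)` call would leave the base state stale, which is the failure mode the paragraph above warns against.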
Applications can extend Java 3D’s classes and add their own methods. However, they may not override Java 3D’s scene graph traversal semantics because the nodes do not contain explicit traversal and draw methods. Java 3D’s renderer retains those semantics internally.
Java 3D does provide hooks for mixing Java 3D–controlled scene graph rendering and user-controlled rendering using Java 3D’s immediate-mode constructs.
1.4.1 Browsers
Today’s Internet browsers support 3D content by passing such data to plug-in 3D
viewers that render into their own window. It is anticipated that, over time, the
display of 3D content will become integrated into the main browser display. In
fact, some of today’s 3D browsers display 2D content as 2D objects within a 3D
world.
1.4.2 Games
Developers of 3D game software have typically attempted to wring out every last ounce of performance from the hardware. Historically they have been quite willing to use hardware-specific, nonportable optimizations to get the best performance possible. As such, in the past, game developers have tended to program below the level of easy-to-use software such as Java 3D. However, the trend in 3D games today is to leverage general-purpose 3D hardware accelerators and to use fewer “tricks” in rendering.
So, while Java 3D was not explicitly designed to match the game developer’s every expectation, Java 3D’s sophisticated implementation techniques should provide more than enough performance to support many game applications. One might argue that applications written using a general API like Java 3D may have a slight performance penalty over those employing special, nonportable techniques. However, other factors such as portability, time to market, and development cost must be weighed against absolute peak performance.
javax.media.j3d
    VirtualUniverse
    Locale
    View
    PhysicalBody
    PhysicalEnvironment
    Screen3D
    Canvas3D (extends awt.Canvas)
    SceneGraphObject
        Node
            Group
            Leaf
        NodeComponent
            Various component objects
    Transform3D
javax.vecmath
    Matrix classes
    Tuple classes
[Figure: a VirtualUniverse object with a Locale object and two attached BranchGroup (BG) nodes]
Below the VirtualUniverse object is a Locale object. The Locale object defines
the origin, in high-resolution coordinates, of its attached branch graphs. A virtual
while (true) {
    Process input
    if (request to exit) break
    Perform behaviors
    Traverse the scene graph and render visible objects
}
Cleanup and exit
return objRoot;
}
public HelloUniverse() {
<construct canvas3d, set layout of applet, create canvas>
A Java 3D program first constructs a scene graph and then, once it is built, hands that scene graph to Java 3D for processing.
The structure of a scene graph determines the relationships among the objects in the graph and determines which objects a programmer can manipulate as a single entity. A group node provides a single point for handling or manipulating all the nodes beneath it. A programmer can tune a scene graph appropriately by thinking about what manipulations an application will need to perform. He or she can make a particular manipulation easy or hard by grouping or regrouping nodes in various ways.
The code next constructs a group node to hold the two leaf nodes. It uses the Group node’s addChild method to add the two leaf nodes to the group node as children, finishing the construction of the scene graph. Figure 2-1 shows the constructed scene graph: all the nodes, the node component objects, and the variables used in constructing the scene graph.
Java 3D places restrictions on how a program can insert a scene graph into a universe.
[Figure: the constructed scene graph: a Group node (myGroup) with children myShape1 and myShape2]
[Figure: a VirtualUniverse object with a Locale object, content nodes, and a view branch containing a ViewPlatform (VP) leaf node, a View object, and other view-related objects]
The BranchGroup node serves as the root of a branch graph. Collectively, the BranchGroup node and all of its children form the branch graph. The two kinds of branch graphs are called content branches and view branches. A content branch contains only content-related leaf nodes, while a view branch contains a ViewPlatform leaf node and may contain other content-related leaf nodes. Typically, a universe contains more than one branch graph: one view branch and any number of content branches.
Besides serving as the root of a branch graph, the BranchGroup node has two special properties: it alone may be inserted into a Locale object, and it may be compiled. Java 3D treats uncompiled and compiled branch graphs identically, though compiled branch graphs will typically render more efficiently.
We could not insert the scene graph created by our simple example (Listing 2-1) into a Locale because it does not have a BranchGroup node for its root. Listing 2-2 shows a modified version of our first code example that creates a simple content branch graph and the minimum of superstructure objects. Of special note, Locales do not have children, and they are not part of the scene graph. The method for inserting a branch graph is addBranchGraph, not addChild, the method for adding children to all group nodes.
Listing 2-2 Code for Constructing a Scene Graph and Some Superstructure Objects
import com.sun.j3d.utils.universe.*;
triggered behaviors, process any identified input devices, and check for and generate appropriate collision events.
The order in which a particular Java 3D implementation renders objects onto the display is deliberately left undefined. One implementation might render the first Shape3D object and then the second. Another might render the second Shape3D node before the first. Yet another implementation may render them in parallel.
2.2.1 Bounds
Bounds objects allow a programmer to define a volume in space. There are three ways to specify this volume: as a box, a sphere, or a set of planes enclosing a space. Bounds objects specify a volume in which particular operations apply. Environmental effects such as lighting, fog, alternate appearance, and model clipping planes use bounds objects to specify their region of influence. Any object that falls within the space defined by the bounds object has the particular environmental effect applied. The proper use of bounds objects can ensure that these environmental effects are applied only to those objects in a particular volume, such as a light applying only to the objects within a single room.
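The region-of-influence idea can be sketched in plain Java (this is not the real Bounds API; the class and method names are ours): an effect applies to an object only when the object’s position falls inside the effect’s volume, here a sphere.

```java
// Sketch of a spherical region of influence (NOT the real Bounds classes):
// an effect such as a light applies only to points inside the sphere.
class InfluenceSphere {
    final double cx, cy, cz, radius;
    InfluenceSphere(double cx, double cy, double cz, double radius) {
        this.cx = cx; this.cy = cy; this.cz = cz; this.radius = radius;
    }
    // true if (x, y, z) lies within the volume, so the effect applies there
    boolean influences(double x, double y, double z) {
        double dx = x - cx, dy = y - cy, dz = z - cz;
        return dx * dx + dy * dy + dz * dz <= radius * radius;
    }
}
```

A light scoped to a room would hold one such volume and skip any shape whose position tests false.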
Bounds objects are also used to specify a region of action. Behaviors and sounds only execute or play if they are close enough to the viewer. The use of behavior and sound bounds objects allows Java 3D to cull away those behaviors and sounds that are too far away to affect the viewer (listener). By using bounds properly, a programmer can ensure that only the relevant behaviors and sounds execute or play.
Finally, bounds objects are used to specify a region of application for per-view
operations such as background, clip, and soundscape selection. For example, the
background node whose region of application is closest to the viewer is selected for
a given view.
2.2.2 Nodes
All scene graph nodes have an implicit location in space of (0, 0, 0). For objects that exist in space, this implicit location provides a local coordinate system for that object, a fixed reference point. Even abstract objects that may not seem to have a well-defined location, such as behaviors and ambient lights, have this implicit location. An object’s location provides an origin for its local coordinate system and, just as importantly, an origin for any bounding volume information associated with that object.
If the capability to write the transform has been set, Java 3D will allow the following code to execute:
myTrans.setTransform(myT3D);
The reason for the exception is that the TransformGroup is not enabled for reading (ALLOW_TRANSFORM_READ).
It is important to ensure that all needed capabilities are set and that unnecessary
capabilities are not set. The process of compiling a branch graph examines the
capability bits and uses that information to reduce the amount of computation
needed to run a program.
all the geometry defined by its descendants. Spatial grouping allows for efficient
implementation of operations such as proximity detection, collision detection,
view frustum culling, and occlusion culling.
[Figure: a Virtual Universe containing hi-res Locales with attached BranchGroup (BG) nodes and leaf nodes]
The most common node object, along the path from the root to the leaf, that changes the graphics state is the TransformGroup object. The TransformGroup object can change the position, orientation, and scale of the objects below it.
Most graphics state attributes are set by a Shape3D leaf node through its constituent Appearance object, thus allowing parallel rendering. The Shape3D node also has a constituent Geometry object that specifies its geometry—this permits different shape objects to share common geometry without sharing material attributes (or vice versa).
3.1.3 Rendering
The Java 3D renderer incorporates all graphics state changes made in a direct
path from a scene graph root to a leaf object in the drawing of that leaf object.
Java 3D provides this semantic for both retained and compiled-retained modes.
Only those capability bits that are explicitly enabled (set) prior to the object being compiled or made live are legal. The methods for setting and getting capability bits are described next.
Constructors
The SceneGraphObject specifies one constructor.
public SceneGraphObject()
Methods
The following methods are available on all scene graph objects.
The first method returns a flag that indicates whether the node is part of a scene
graph that has been compiled. If so, only those capabilities explicitly allowed by
the object’s capability bits are allowed. The second method returns a flag that
indicates whether the node is part of a scene graph that has been attached to a
virtual universe via a high-resolution Locale object.
These three methods provide applications with the means for accessing and modifying the capability bits of a scene graph object. The bit positions of the capability bits are defined as public static final constants on a per-object basis. Every instance of every scene graph object has its own set of capability bits. An example of a capability bit is the ALLOW_BOUNDS_WRITE bit in node objects. Only those methods corresponding to capabilities that are enabled before the object is first compiled or made live are subsequently allowed for that object. A RestrictedAccessException is thrown if an application calls setCapability or clearCapability on live or compiled objects. Note that only a single bit may be set or cleared per method invocation—bits may not be ORed together.
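The life cycle described above can be modeled with a toy sketch (plain Java with a BitSet; this is not the real SceneGraphObject, and the names are ours): bits are set one at a time, and any attempt after the object goes live fails.

```java
import java.util.BitSet;

// Toy model (NOT the real SceneGraphObject) of capability bits: one bit
// per call, and no changes once the object is live or compiled.
class CapObject {
    static final int ALLOW_BOUNDS_WRITE = 0;    // example bit position
    private final BitSet caps = new BitSet();
    private boolean liveOrCompiled = false;

    void setCapability(int bit) {
        if (liveOrCompiled)                     // stands in for RestrictedAccessException
            throw new IllegalStateException("object is live or compiled");
        caps.set(bit);                          // a single bit, never an ORed mask
    }
    boolean getCapability(int bit) { return caps.get(bit); }
    void makeLive() { liveOrCompiled = true; }
}
```

The single-bit signature is the point: because each call names exactly one bit position, there is no way to pass an ORed mask by accident.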
These methods access or modify the userData field associated with this scene
graph object. The userData field is a reference to an arbitrary object and may be
used to store any user-specific data associated with this scene graph object—it is
not used by the Java 3D API. If this object is cloned, the userData field is copied
to the newly cloned object.
Constants
Node object constants allow an application to individually enable runtime capabilities. These capability bits are enforced only when the node is part of a live or compiled scene graph.
These bits, when set using the setCapability method, specify that the node will permit an application to invoke the getBounds and setBounds methods, respectively. An application can choose to enable a particular set method but not the associated get method, or vice versa. The application can choose to enable both methods or, by default, leave the method(s) disabled.
These bits, when set using the setCapability method, specify that the node will permit an application to invoke the getBoundsAutoCompute and setBoundsAutoCompute methods, respectively. An application can choose to enable a particular set method but not the associated get method, or vice versa. The application can choose to enable both methods or, by default, leave the method(s) disabled.
These flags specify that this Node can have its pickability read or changed.
This flag specifies that this Node will be reported in the collision SceneGraphPath if a collision occurs. This capability is only specifiable for Group nodes; it is ignored for Leaf nodes. The default for Group nodes is false. All interior nodes not needed for uniqueness in a SceneGraphPath that don’t have this flag set to true will not be reported in the SceneGraphPath.
These flags specify that this Node allows read or write access to its collidability
state.
This flag specifies that this node allows read access to its local-coordinates-to-virtual-world-(Vworld)-coordinates transform.
Constructors
The Node object specifies the following constructor.
public Node()
This constructor constructs and initializes a Node object with default values. The Node class provides an abstract class for all group and leaf nodes. It provides a common framework for constructing a Java 3D scene graph, specifically bounding volumes. The default values are:

Parameter     Default Value
pickable      true
collidable    true
Methods
The following methods are available on Node objects, subject to the capabilities
that are enabled for live or compiled nodes.
Retrieves the parent of this node, or null if this node has no parent. This method
is only valid during the construction of the scene graph. If this object is part of a
live or compiled scene graph, a RestrictedAccessException will be thrown.
These methods set and get the value that determines whether the node’s geometric bounds are computed automatically, in which case the bounds will be read-only, or are set manually, in which case the value specified by setBounds will be used. The default is automatic.
These methods set and retrieve the flag indicating whether this node can be picked. A setting of false means that this node and its children are all unpickable.
The set method sets the collidable value. The get method returns the collidable
value. This value determines whether this node and its children, if a group node,
can be considered for collision purposes. If the value is false, neither this node
nor any children nodes will be traversed for collision purposes. The default value
is true. The collidable setting is the way that an application can perform collision
culling.
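The pruning behavior described above can be sketched as follows (toy classes of our own naming, not the real Java 3D traversal): when a node’s collidable flag is false, the traversal skips that node and its entire subtree.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch (NOT the real Java 3D traversal): collision processing visits
// a node's subtree only while the collidable flag remains true.
class CNode {
    boolean collidable = true;                 // default value, as in the spec
    final List<CNode> children = new ArrayList<>();

    int countCollidable() {
        if (!collidable) return 0;             // prune this node and all children
        int visited = 1;
        for (CNode c : children) visited += c.countCollidable();
        return visited;
    }
}
```

Setting collidable to false on an interior node is thus a one-line way to cull a whole subtree from collision consideration.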
Constructors
The NodeComponent object specifies the following constructor.
public NodeComponent()
Methods
The following methods are available on NodeComponent objects.
[Figure: a Virtual Universe with a hi-res Locale and an attached BranchGroup (BG) node, plus PhysicalBody and PhysicalEnvironment objects]
Objects attached to a particular Locale are all relative to the location of that Locale’s high-resolution coordinates.
[Figure: a Virtual Universe containing hi-res Locales with BranchGroup (BG) nodes, group nodes, and leaf nodes]
A 256-bit fixed-point number also has the advantage of being able to directly
represent nearly any reasonable single-precision floating-point value exactly.
High-resolution coordinates in Java 3D are used only to embed more traditional floating-point coordinate systems within a much higher-resolution substrate. In this way a visually seamless virtual universe of any conceivable size or scale can be created, without worry about numerical accuracy.
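As a sketch of how such a coordinate might be decoded (assuming, as in Java 3D’s HiResCoord, eight 32-bit integers per axis with the most significant word first and the binary point at bit 128; the class and method names here are hypothetical, not API names):

```java
import java.math.BigInteger;
import java.nio.ByteBuffer;

// Hypothetical helper (our names, not the API's): decode one 256-bit
// fixed-point coordinate stored as eight 32-bit words, most significant
// first, assuming the binary point sits at bit 128.
class HiResSketch {
    static double toDouble(int[] words) {             // words.length == 8
        ByteBuffer buf = ByteBuffer.allocate(32);
        for (int w : words) buf.putInt(w);            // big-endian two's complement
        BigInteger fixed = new BigInteger(buf.array());
        return Math.scalb(fixed.doubleValue(), -128); // shift binary point left 128 bits
    }
}
```

Math.scalb multiplies by an exact power of two, so the only rounding happens in the single BigInteger-to-double conversion.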
between the location of their high-resolution Locale, and the view platform's
high-resolution Locale. (In the common case of the Locales being the same, no
translation is necessary.)
Constructors
The VirtualUniverse object has the following constructors.
public VirtualUniverse()
Methods
The VirtualUniverse object has the following methods.
The first method returns the Enumeration object of all Locales in this virtual universe. The numLocales method returns the number of Locales.
This method removes a Locale and its associated branch graphs from this universe. All branch graphs within the specified Locale are detached, regardless of whether their ALLOW_DETACH capability bits are set. The Locale is then marked as being dead: no branch graphs may subsequently be attached.
This method removes all Locales and their associated branch graphs from this universe. All branch graphs within each Locale are detached, regardless of whether their ALLOW_DETACH capability bits are set. Each Locale is then marked as being dead: no branch graphs may subsequently be attached.
These methods set and retrieve the priority of all Java 3D threads. The default
value is the priority of the thread that started Java 3D.
Constructors
The Locale object has the following constructors.
These three constructors create a new high-resolution Locale object in the specified VirtualUniverse. The first form constructs a Locale object located at (0.0, 0.0, 0.0). The other two forms construct a Locale object using the specified high-resolution coordinates. In the second form, the parameters x, y, and z are arrays of eight 32-bit integers that specify the respective high-resolution coordinate.
Methods
The Locale object has the following methods. For the Locale picking methods,
see Section 11.3.2, “BranchGroup Node and Locale Node Pick Methods.”
This method retrieves the virtual universe within which this Locale object is contained.
The first three methods add, remove, and replace a branch graph in this Locale.
Adding a branch graph has the effect of making the branch graph “live.” The
fourth method retrieves the number of branch graphs in this Locale. The last
method retrieves an Enumeration object of all branch graphs.
Constructors
The HiResCoord object has the following constructors.
The first constructor generates the high-resolution coordinate point from three integer arrays of length eight. The integer arrays specify the coordinate values corresponding with their name. The second constructor creates a new high-resolution coordinate point by cloning the high-resolution coordinates hc. The third constructor creates new high-resolution coordinates with value (0.0, 0.0, 0.0).
Methods
These five methods modify the value of the high-resolution coordinates this. The first method resets all three coordinate values with the values specified by the three integer arrays. The second method sets the value of this to that of the high-resolution coordinates hiRes. The third, fourth, and fifth methods reset the corresponding coordinate of this.
These five methods retrieve the value of the high-resolution coordinates this.
The first method retrieves the high-resolution coordinates’ values and places
those values into the three integer arrays specified. All three arrays must have
length greater than or equal to eight. The second method updates the value of the
high-resolution coordinates hc to match the value of this. The third, fourth, and
fifth methods retrieve the coordinate value that corresponds to their name and
update the integer array specified, which must be of length eight or greater.
These methods scale a high-resolution coordinate point. The first method scales h1 by the scalar value scale and places the scaled coordinates into this. The second method scales this by the scalar value scale and places the scaled coordinates back into this.
These two methods negate a high-resolution coordinate point. The first method
negates h1 and stores the result in this. The second method negates this and
stores its negated value back into this.
This method subtracts h1 from this and stores the resulting difference vector in the double-precision floating-point vector v. Note that although the individual high-resolution coordinate points cannot be represented accurately by double-precision numbers, this difference vector between them can be accurately represented by doubles for many practical purposes, such as viewing.
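Elsewhere in the specification, each high-resolution coordinate is stored as a 256-bit fixed-point value packed into eight 32-bit integers, with the binary point between the fourth and fifth words (128 fraction bits). The sketch below assumes that layout and uses plain Java rather than the HiResCoord class; it shows why the difference of two such values, unlike the values themselves, fits comfortably in a double:

```java
import java.math.BigInteger;

public class HiResDiff {
    // Pack eight 32-bit words (most significant first) into a signed
    // 256-bit two's-complement integer.
    static BigInteger pack(int[] words) {
        BigInteger v = BigInteger.ZERO;
        for (int w : words) {
            v = v.shiftLeft(32).or(BigInteger.valueOf(w & 0xFFFFFFFFL));
        }
        if (words[0] < 0) {
            v = v.subtract(BigInteger.ONE.shiftLeft(256)); // sign-extend
        }
        return v;
    }

    // Difference of one coordinate axis, returned as a double (meters).
    // The binary point sits between words 3 and 4, i.e. 128 fraction bits.
    static double difference(int[] a, int[] b) {
        BigInteger diff = pack(a).subtract(pack(b));
        return diff.doubleValue() / Math.pow(2.0, 128);
    }
}
```

Even when the two coordinates are astronomically far from the origin, their difference is typically small and loses no meaningful precision in the conversion to double.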
The first method performs an arithmetic comparison between this and h1. It returns true if the two high-resolution coordinate points are equal; otherwise, it returns false. The second method returns true if the Object o1 is of type HiResCoord and all of the data members of o1 are equal to the corresponding data members in this HiResCoord.
Group nodes are the glue elements used in constructing a scene graph. The following subsections list the seven group nodes (see Figure 5-1) and their definitions. All group nodes can have a variable number of child node objects, including other group nodes as well as leaf nodes. These children have an associated index that allows operations to specify a particular child. However, unless one of the special ordered group nodes is used, the Java 3D renderer can choose to render a group node's children in whatever order it wishes (including rendering the children in parallel).
SceneGraphObject
    Node
        Group
            BranchGroup
            OrderedGroup
                DecalGroup
            SharedGroup
            Switch
            TransformGroup
Figure 5-1 Group Node Hierarchy
Constants
These flags, when enabled using the setCapability method, specify that this
Group node will allow the following methods, respectively:
• numChildren, getChild, getAllChildren
• setChild, insertChild, removeChild
• addChild, moveTo
These capability bits are enforced only when the node is part of a live or compiled scene graph.
These flags, when enabled using the setCapability method, specify that this
Group node will allow reading and writing of its collision bounds.
Constructors
public Group()
Methods
The Group node class defines the following methods.
The first method returns a count of the number of children. The second method
returns the child at the specified index.
The first method replaces the child at the specified index with a new child. The
second method inserts a new child before the child at the specified index. The
third method removes the child at the specified index. Note that if this Group
node is part of a live or compiled scene graph, only BranchGroup nodes may be
added to or removed from it—and only if the appropriate capability bits are set.
This method adds a new child as the last child in the group. Note that if this
Group node is part of a live or compiled scene graph, only BranchGroup nodes
may be added to it—and only if the appropriate capability bits are set.
This method moves the specified BranchGroup node from its old location in the
scene graph to the end of this group, in an atomic manner. Functionally, this
method is equivalent to the following lines:
branchGroup.detach();
this.addChild(branchGroup);
These methods set and retrieve the collision bounding object for a node.
The set method causes this Group node to be reported as the collision target
when collision is being used and this node or any of its children is in a collision.
The default is false. This method tries to set the capability bit Node.ENABLE_COLLISION_REPORTING. The get method returns the collision target state.
For collision with USE_GEOMETRY set, the collision traverser will check the
geometry of all the Group node’s leaf descendants. For collision with
USE_BOUNDS set, the collision traverser will check the bounds at this Group
node. In both cases, if there is a collision, this Group node will be reported as the
colliding object in the SceneGraphPath.
Constants
The BranchGroup class adds the following new constant.
This flag, when enabled using the setCapability method, allows this BranchGroup node to be detached from its parent group node. This capability flag is enforced only when the node is part of a live or compiled scene graph.
Constructors
public BranchGroup()
Methods
The BranchGroup class defines the following methods.
[Figure: a BranchGroup (BG) node attached beneath a hi-res Locale in the virtual universe. A BranchGroup node can be reparented or removed at run time.]
This method compiles the scene graph rooted at this BranchGroup and creates and caches a newly compiled scene graph.

Note: Even though arbitrary affine transformations are allowed, better performance will result if all matrices within a branch graph are congruent, containing only rotations, translation, and uniform scale.
The effects of transformations in the scene graph are cumulative. The concatenation of the transformations of each TransformGroup in a direct path from the Locale to a Leaf node defines a composite model transformation (CMT) that takes points in that Leaf node's local coordinates and transforms them into Virtual World (Vworld) coordinates. This composite transformation is used to transform points, normals, and distances into Vworld coordinates. Points are transformed by the CMT. Normals are transformed by the inverse-transpose of the CMT. Distances are transformed by the scale of the CMT. In the case of a transformation containing a nonuniform scale or shear, the maximum scale value in any direction is used. This ensures, for example, that a transformed bounding sphere, which is specified as a point and a radius, continues to enclose all objects that are also transformed using a nonuniform scale.
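The cumulative-transform rule can be sketched with plain 4×4 matrix arithmetic; row-major double arrays stand in for Transform3D here, and the two matrices are illustrative, not taken from the specification:

```java
public class CompositeTransform {
    // result = a * b for 4x4 row-major matrices.
    static double[][] mul(double[][] a, double[][] b) {
        double[][] r = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    r[i][j] += a[i][k] * b[k][j];
        return r;
    }

    // Transform a homogeneous point (x, y, z, 1).
    static double[] transform(double[][] m, double[] p) {
        double[] r = new double[4];
        for (int i = 0; i < 4; i++)
            for (int k = 0; k < 4; k++)
                r[i] += m[i][k] * p[k];
        return r;
    }

    public static void main(String[] args) {
        // TransformGroup nearest the Locale: translate by (10, 0, 0).
        double[][] translate = {{1,0,0,10},{0,1,0,0},{0,0,1,0},{0,0,0,1}};
        // TransformGroup nearest the Leaf: uniform scale by 2.
        double[][] scale = {{2,0,0,0},{0,2,0,0},{0,0,2,0},{0,0,0,1}};
        // The CMT concatenates Locale-to-leaf, outermost transform first.
        double[][] cmt = mul(translate, scale);
        double[] vworld = transform(cmt, new double[]{1, 1, 1, 1});
        System.out.println(vworld[0] + " " + vworld[1] + " " + vworld[2]);
        // prints 12.0 2.0 2.0
    }
}
```

The local point (1, 1, 1) is first scaled to (2, 2, 2), then translated to (12, 2, 2) in Vworld coordinates, showing the order in which the path's transforms compose.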
Constants
The TransformGroup class adds the following new flags.
These flags, when enabled using the setCapability method, allow this node’s
Transform3D to be read or written. They are only used when the node is part of
a live or compiled scene graph.
Constructors
public TransformGroup()
public TransformGroup(Transform3D t1)
These construct and initialize a new TransformGroup. The first form initializes the node's Transform3D to the identity transformation; the second form initializes the node's Transform3D to a copy of the specified transform.
Methods
The TransformGroup class defines the following methods.
These methods retrieve or set this node's attached Transform3D object by copying the transform to or from the specified object.
The first method creates a new instance of the node. This method is called by
cloneTree to duplicate the current node. The second method copies all the node
information from the originalNode into the current node. This method is called
from the cloneNode method, which is in turn called by the cloneTree method.
For each NodeComponent object contained by the object being duplicated, the NodeComponent's duplicateOnCloneTree flag is used to determine whether the NodeComponent should be duplicated in the new node or whether a reference to the current NodeComponent should be placed in the new node. This flag can be overridden by setting the forceDuplicate parameter in the cloneTree method to true.
Constructors
public OrderedGroup()
The DecalGroup node specifies that its children should be rendered in index order and that they generate coplanar objects. Examples of this include painted decals or text on surfaces and a checkerboard layered on top of a table.
The first child, at index 0, defines the surface on top of which all other children are rendered. The geometry of this child must encompass all other children; otherwise, incorrect rendering may result. The polygons contained within each of the children must be facing the same way. If the polygons defined by the first child are front facing, then all other surfaces should be front facing. In this case, the polygons are rendered in order. The renderer can use knowledge of the coplanar nature of the surfaces to avoid Z-buffer collisions (for example, if the underlying implementation supports stenciling or polygon offset, then these techniques may be employed). If the main surface is back facing, then all other surfaces should be back facing and need not be rendered (even if back-face culling is disabled).
Note that using the DecalGroup node does not guarantee that Z-buffer collisions are avoided. An implementation of Java 3D may fall back to treating the DecalGroup node as an ordinary OrderedGroup node.
Constructors
public DecalGroup()
Constants
These flags, when enabled using the setCapability method, allow reading and
writing of the values that specify the child-selection criteria. They are only used
when the node is part of a live or compiled scene graph.
These values, when used in place of a non-negative integer index value, indicate which children of the Switch node are selected for rendering. A value of CHILD_NONE indicates that no children are rendered. A value of CHILD_ALL indicates that all children are rendered, effectively making this Switch node operate as an ordinary Group node. A value of CHILD_MASK indicates that the childMask BitSet is used to select the children that are rendered.
Constructors
public Switch()
These constructors initialize a new Switch node using the specified parameters.
Methods
The Switch node class defines the following methods.
These methods access or modify the index of the child that the Switch object will
draw. The value may be a non-negative integer, indicating a specific child, or it
may be one of the following constants: CHILD_NONE, CHILD_ALL, or CHILD_MASK.
If the specified value is out of range, then no children are drawn.
These methods access or modify the mask used to select the children that the
Switch object will draw when the whichChild parameter is CHILD_MASK. This
parameter is ignored during rendering if the whichChild parameter is a value
other than CHILD_MASK.
This method returns the currently selected child. If whichChild is out of range,
or is set to CHILD_MASK, CHILD_ALL, or CHILD_NONE, then null is returned.
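The child-selection rules above can be sketched as follows. This is plain Java, not the actual Switch implementation, and the negative constant values are illustrative (the real constants are defined by the Switch class):

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

public class SwitchSelect {
    // Illustrative stand-ins for the Switch class's selection constants.
    static final int CHILD_NONE = -1, CHILD_ALL = -2, CHILD_MASK = -3;

    // Indices of the children that would be rendered for a given
    // whichChild value; an out-of-range index draws nothing.
    static List<Integer> selected(int whichChild, int numChildren, BitSet mask) {
        List<Integer> out = new ArrayList<>();
        if (whichChild == CHILD_ALL) {
            for (int i = 0; i < numChildren; i++) out.add(i);
        } else if (whichChild == CHILD_MASK) {
            for (int i = 0; i < numChildren; i++) if (mask.get(i)) out.add(i);
        } else if (whichChild >= 0 && whichChild < numChildren) {
            out.add(whichChild);
        } // CHILD_NONE or out of range: no children drawn
        return out;
    }
}
```

Note that the mask is consulted only when whichChild is CHILD_MASK, mirroring the behavior described for setChildMask.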
Leaf nodes define atomic entities such as geometry, lights, and sounds. The leaf nodes and their associated meanings follow.
Constructors
public Leaf()
SceneGraphObject
    Node
        Leaf
            AlternateAppearance
            Background
            Behavior
                (predefined behaviors)
            BoundingLeaf
            Clip
            Fog
                ExponentialFog
                LinearFog
            Light
                AmbientLight
                DirectionalLight
                PointLight
                    SpotLight
            Link
            Morph
            Shape3D
                OrientedShape3D
            Sound
                BackgroundSound
                PointSound
                    ConeSound
            Soundscape
            ViewPlatform
Figure 6-1 Leaf Node Hierarchy
The list of geometry objects must all be of the same equivalence class, that is, the same basic type of primitive. For subclasses of GeometryArray, all point objects are equivalent, all line objects are equivalent, and all polygon objects are equivalent. For other subclasses of Geometry, only objects of the same subclass are equivalent. The equivalence classes are as follows:
• GeometryArray (point): [Indexed]PointArray
• GeometryArray (line): [Indexed]{LineArray, LineStripArray}
• GeometryArray (polygon): [Indexed]{TriangleArray, TriangleStripArray,
TriangleFanArray, QuadArray}
• CompressedGeometry
• Raster
• Text3D
Constants
The Shape3D node object defines the following flags.
These flags, when enabled using the setCapability method, allow reading and writing of the Geometry and Appearance component objects, the collision bounds, and the appearance override enable, respectively. These capability flags are enforced only when the node is part of a live or compiled scene graph.
Constructors
The Shape3D node object defines the following constructors.
public Shape3D()
The first form constructs and initializes a new Shape3D object with the specified
geometry and appearance components. The second form uses the specified
geometry and a null appearance component. The list of geometry components is
initialized with the specified geometry component as the single element with an
index of 0. If the geometry component is null, no geometry is drawn. A null
appearance component specifies that default values are used for all appearance
attributes.
Methods
The Shape3D node object defines the following methods.
These methods access or modify the Geometry component object associated with this Shape3D node. The first setGeometry method replaces the geometry component at index 0 in this Shape3D node's list of geometry components with the specified geometry component. The second setGeometry method replaces the geometry component at the specified index in this Shape3D node's list of geometry components with the specified geometry component. If there are existing geometry components in the list (besides the one being replaced), the new geometry component must be of the same equivalence class (point, line, polygon, CompressedGeometry, Raster, Text3D) as the others. The first getGeometry method retrieves the geometry component at index 0 from this Shape3D node's list of geometry components. The second getGeometry method retrieves the geometry component at the specified index from this Shape3D node's list of geometry components.
These methods insert and remove the specified geometry component into or from this Shape3D node's list of geometry components. The insertGeometry method inserts the specified geometry component into this Shape3D node's list of geometry components at the specified index. If there are existing geometry components in the list, the new geometry component must be of the same equivalence class (point, line, polygon, CompressedGeometry, Raster, Text3D) as the others. The removeGeometry method removes the geometry component at the specified index from this Shape3D node's list of geometry components.
This method appends the specified geometry component to this Shape3D node’s
list of geometry components. If there are existing geometry components in the
list, the new geometry component must be of the same equivalence class (point,
line, polygon, CompressedGeometry, Raster, Text3D) as the others.
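The equivalence-class restriction shared by these methods can be sketched as a simple membership check; this is illustrative plain Java, not the Shape3D implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class GeometryList {
    // The equivalence classes named in the specification.
    enum GeomClass { POINT, LINE, POLYGON, COMPRESSED_GEOMETRY, RASTER, TEXT3D }

    private final List<GeomClass> geometries = new ArrayList<>();

    // Append a geometry, enforcing that every entry in the list
    // belongs to the same equivalence class as the first entry.
    void addGeometry(GeomClass g) {
        if (!geometries.isEmpty() && geometries.get(0) != g) {
            throw new IllegalArgumentException(
                "new geometry must match the list's class: " + geometries.get(0));
        }
        geometries.add(g);
    }

    int numGeometries() { return geometries.size(); }
}
```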
These methods set and retrieve the collision bounds for this node.
These two methods check if the geometry component of this shape node under
path intersects with the pickShape.
These methods set and retrieve the flag that indicates whether this node's appearance can be overridden. If the flag is true, this node's appearance may be overridden by an AlternateAppearance leaf node, regardless of the value of the ALLOW_APPEARANCE_WRITE capability bit. The default value is false. See Section 6.15, "AlternateAppearance Node."
The OrientedShape3D leaf node is a Shape3D node that is oriented along a specified axis or about a specified point. It defines an alignment mode and a rotation point or axis. The node rotates so that the local +z axis of the object points at the viewer's eye position. This is done regardless of the transforms above this OrientedShape3D node in the scene graph.
Constants
The OrientedShape3D node object defines the following flags:
These flags, when enabled using the setCapability method, allow reading and
writing of the alignment mode, alignment axis, and rotation point information,
respectively. These capability flags are enforced only when the node is part of a
live or compiled scene graph.
Specifies that rotation should be about the specified point and that the children’s
Y-axis should match the view object’s Y-axis.
Constructors
The OrientedShape3D node specifies the following constructors.
Methods
These methods set and retrieve the alignment mode. The alignment mode is one
of ROTATE_ABOUT_AXIS or ROTATE_ABOUT_POINT.
These methods set and retrieve the alignment axis. This is the ray about which this OrientedShape3D rotates when the mode is ROTATE_ABOUT_AXIS.
These methods set and retrieve the rotation point. This is the point about which
the OrientedShape3D rotates when the mode is ROTATE_ABOUT_POINT.
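As an illustration of the ROTATE_ABOUT_AXIS behavior for the common case of a Y-aligned axis, the required rotation reduces to a single angle in the XZ plane. This is a sketch, not the Java 3D implementation, and it assumes the object position and eye position are expressed in the same coordinate frame:

```java
public class Billboard {
    // Rotation angle about +y that turns the local +z axis to point
    // from the object's position (px, pz) toward the eye (ex, ez).
    static double yRotationToEye(double px, double pz, double ex, double ez) {
        return Math.atan2(ex - px, ez - pz);
    }
}
```

An eye directly along +z from the object needs no rotation; an eye along +x needs a quarter turn.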
Constants
The BoundingLeaf node object defines the following flags.
These flags, when enabled using the setCapability method, allow an application to invoke methods that respectively read and write the bounding region object.
Constructors
The BoundingLeaf node object defines the following constructors.
public BoundingLeaf()
Methods
These methods set and retrieve the BoundingLeaf node’s bounding region.
Constants
The Background node object defines the following flags.
These flags, when enabled using the setCapability method, allow an application to invoke methods that respectively read and write the application region, the image, the color, and the background geometry. These capability flags are enforced only when the node is part of a live or compiled scene graph.
Constructors
The Background node object defines the following constructors.
public Background()
The first two forms construct a Background leaf node with the specified color. The third form constructs a Background leaf node with the specified 2D image. The final form constructs a Background leaf node with the specified geometry.
Methods
The Background node object defines the following methods.
These two methods access or modify the background image. If the image is not
null then it is used in place of the color.
These two methods access or modify the Background geometry. The setGeometry method sets the background geometry to the specified BranchGroup node. If non-null, this background geometry is drawn on top of the background color or image using a projection matrix that essentially puts the geometry at infinity. The geometry should be pretessellated onto a unit sphere.
These two methods access or modify the Background node's application bounds. This bounds is used as the application region when the application bounding leaf is set to null. The getApplicationBounds method returns a copy of the associated bounds.
These two methods access or modify the Background node's application bounding leaf. When set to a value other than null, this bounding leaf overrides the application bounds object and is used as the application region.
Constants
These flags, when enabled using the setCapability method, allow an application to invoke methods that respectively read and write the application region and the back distance. These capability flags are enforced only when the node is part of a live or compiled scene graph.
Constructors
The Clip node object defines the following constructors.
public Clip()
Constructs a Clip leaf node with the rear clip plane at the specified distance, in
the local coordinate system, from the eye.
Methods
The Clip node object defines the following methods.
These methods access or modify the back clipping distances in the Clip node.
This distance specifies the back clipping plane in the local coordinate system of
the node. There are several considerations that need to be taken into account
when choosing values for the front and back clip distances. See Section 9.7.3,
“Projection and Clip Parameters,” for details.
These two methods access or modify the Clip node’s application bounds. This
bounds is used as the application region when the application bounding leaf is
set to null. The getApplicationBounds method returns a copy of the associated
bounds.
These two methods access or modify the Clip node’s application bounding leaf.
When set to a value other than null, this bounding leaf overrides the application
bounds object and is used as the application region.
Ax + By + Cz + D ≤ 0
where A, B, C, and D are the parameters that specify the plane.
The parameters are passed in the x, y, z, and w fields, respectively, of a Vector4d
object. The intersection of the set of half-spaces corresponding to the enabled
planes in this ModelClip node defines a region in which points are accepted.
Points in this acceptance region will be rendered (subject to view clipping and
other attributes). Points that are not in the acceptance region will not be rendered.
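The acceptance test described above follows directly from the plane equation. A sketch in plain Java (the real ModelClip node stores each plane's A, B, C, and D in a Vector4d):

```java
public class HalfSpaceClip {
    // plane = {A, B, C, D}; a point is accepted when Ax + By + Cz + D <= 0
    // for every enabled plane, i.e. it lies in the intersection of the
    // enabled half-spaces.
    static boolean accepted(double[][] planes, boolean[] enabled,
                            double x, double y, double z) {
        for (int i = 0; i < planes.length; i++) {
            if (!enabled[i]) continue;
            double[] p = planes[i];
            if (p[0] * x + p[1] * y + p[2] * z + p[3] > 0.0) return false;
        }
        return true;
    }
}
```

For example, the plane {1, 0, 0, -1} keeps only points with x ≤ 1; disabling it removes that constraint.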
Constants
The ModelClip node object defines the following flags.
These flags, when enabled using the setCapability method, allow an application to invoke methods that respectively read and write the influencing bounds and bounding leaf, planes, enable, and scope flags. These capability flags are enforced only when the node is part of a live or compiled scene graph.
Constructors
The ModelClip node object defines the following constructors.
These constructors each construct a new ModelClip node. The first constructor uses the specified planes and enable flags. The second constructor uses the specified parameters and uses defaults for those parameters not specified. Default values are described above.
Methods
The ModelClip node object defines the following methods.
These methods access or modify the ModelClip node’s influencing region. This
is used when the influencing bounding leaf is set to null.
These methods access or modify the ModelClip node’s influencing region. When
set to a value other than null, this overrides the influencing bounds object.
These methods access or modify the specified ModelClip node’s clipping planes.
The planes are an array of six model clipping planes. The set methods copy the
individual planes into this node. The get methods copy the individual planes into
the specified planes, which must be allocated by the caller.
These methods access or modify the specified ModelClip node’s enable flag. The
enables are an array of six booleans.
This method replaces the node at the specified index in this ModelClip node’s
list of scopes with the specified Group node. By default, ModelClip nodes are
scoped only by their influencing bounds. This allows them to be further scoped
by a list of nodes in the hierarchy.
This method retrieves the Group node at the specified index from this ModelClip
node’s list of scopes.
This method inserts the specified Group node into this ModelClip node’s list of
scopes at the specified index. By default, ModelClip nodes are scoped only by
their influencing bounds. This allows them to be further scoped by a list of nodes
in the hierarchy.
This method removes the node at the specified index from this ModelClip node's list of scopes. If this operation causes the list of scopes to become empty, this ModelClip will have universe scope; all nodes within the region of influence will be affected by this ModelClip node.
This method appends the specified Group node to this ModelClip node’s list of
scopes. By default, ModelClip nodes are scoped only by their influencing
bounds. This allows them to be further scoped by a list of nodes in the hierarchy.
This method returns the number of nodes in this ModelClip node's list of scopes. If this number is 0, the list of scopes is empty and this ModelClip node has universe scope: all nodes within the region of influence are affected by this ModelClip node.
Constants
The Fog node object defines the following flags.
These flags, when enabled using the setCapability method, allow an application to invoke methods that respectively read and write the region of influence, read and write color, and read and write scope information. These capability flags are enforced only when the node is part of a live or compiled scene graph.
Constructors
The Fog node object defines the following constructors.
public Fog()
These constructors each construct a new Fog node. The first constructor uses
default values for all parameters. The second constructor uses the specified
parameters and uses defaults for those parameters not specified. Default values
are described above.
Methods
The Fog node object defines the following methods.
These three methods access or modify the Fog node’s color. An application will
typically set this to the same value as the background color.
These methods access or modify the Fog node’s influencing bounds. This bounds
is used as the region of influence when the influencing bounding leaf is set to
null. The Fog node operates on all objects that intersect its region of influence.
The getInfluencingBounds method returns a copy of the associated bounds.
These methods access or modify the Fog node’s influencing bounding leaf.
When set to a value other than null, this overrides the influencing bounds object
and is used as the region of influence.
These methods access or modify the Fog node’s hierarchical scope. By default,
Fog nodes are scoped only by their regions of influence. These methods allow
them to be further scoped by a Group node in the hierarchy. The hierarchical
scoping of a Fog node cannot be accessed or modified if the node is part of a live
or compiled scene graph.
Constants
The ExponentialFog node object defines the following flags.
These flags, when enabled using the setCapability method, allow an application to invoke methods that respectively read and write the density values. These capability flags are enforced only when the node is part of a live or compiled scene graph.
Constructors
The ExponentialFog node object defines the following constructors.
public ExponentialFog()
Each of these constructors creates a new ExponentialFog node using the specified parameters and uses defaults for those parameters not specified.
Methods
The ExponentialFog node object defines the following methods.
These two methods access or modify the density in the ExponentialFog object.
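Exponential fog conventionally attenuates an object's own color by a factor of e^(-density * d) at eye distance d, which is the assumption behind this minimal sketch:

```java
public class ExpFog {
    // Fraction of the object's own color that survives at eye distance d;
    // 1.0 means unfogged, values near 0 approach the pure fog color.
    static double fogFactor(double density, double d) {
        return Math.exp(-density * d);
    }
}
```

A density of zero leaves every object unfogged, and larger densities make the scene fade toward the fog color over shorter distances.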
Constants
The LinearFog node object defines the following flags.
public static final int ALLOW_DISTANCE_READ
public static final int ALLOW_DISTANCE_WRITE
These flags, when enabled using the setCapability method, allow an application to invoke methods that respectively read and write the distance values. These capability flags are enforced only when the node is part of a live or compiled scene graph.
Constructors
The LinearFog node object defines the following constructors.
public LinearFog()
These constructors each construct a new LinearFog node with the specified
parameters and use defaults for those parameters not specified.
Methods
The LinearFog node object defines the following methods.
These four methods access or modify the front and back distances in the LinearFog object. The front distance is the distance at which the fog starts obscuring objects. The back distance is the distance at which the fog fully obscures objects. Objects drawn closer than the front fog distance are not affected by fog. Objects drawn farther than the back fog distance are drawn entirely in the fog color.
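Under these definitions, the fraction of an object's own color that survives falls linearly from 1 at the front distance to 0 at the back distance. A minimal sketch:

```java
public class LinFog {
    // Fraction of the object's own color that survives at eye distance d.
    static double fogFactor(double d, double front, double back) {
        if (d <= front) return 1.0;   // closer than front: unaffected
        if (d >= back) return 0.0;    // beyond back: entirely fog color
        return (back - d) / (back - front);
    }
}
```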
Constants
The Light node object defines the following flags.
These flags, when enabled using the setCapability method, allow reading and
writing of the region of influence, the state, the color, and the scope information,
respectively. These capability flags are enforced only when the node is part of a
live or compiled scene graph.
Constructors
The Light node object defines the following constructors.
public Light()
These two constructors construct and initialize a light with the specified values.
Methods
The Light node object defines the following methods.
These methods access or modify the state of this light (that is, whether the light
is enabled).
These methods access or modify the Light node’s influencing bounds. This
bounds is used as the region of influence when the influencing bounding leaf is
set to null. The Light node operates on all objects that intersect its region of
influence. The getInfluencingBounds method returns a copy of the associated
bounds.
These methods access or modify the Light node’s influencing bounding leaf.
When set to a value other than null, this overrides the influencing bounds object
and is used as the region of influence.
These methods access or modify the Light node's hierarchical scope. By default, Light nodes are scoped only by their regions of influence. These methods allow them to be further scoped by a node in the hierarchy.
Constructors
The AmbientLight node defines the following constructors.
public AmbientLight()
public AmbientLight(Color3f color)
public AmbientLight(boolean lightOn, Color3f color)
The first constructor constructs and initializes a new AmbientLight node using
default parameters. The next two constructors construct and initialize a new
AmbientLight node using the specified parameters. The color parameter is the
color of the light source. The lightOn flag indicates whether this light is on or
off.
Constants
The DirectionalLight node object defines the following flags.
These flags, when enabled using the setCapability method, allow an application to invoke methods that respectively read or write the associated direction. These capability flags are enforced only when the node is part of a live or compiled scene graph.
The DirectionalLight’s direction vector is defined in the local coordinate system
of the node.
Constructors
The DirectionalLight node object defines the following constructors.
public DirectionalLight()
These constructors construct and initialize a directional light with the parameters
provided.
Methods
The DirectionalLight node object defines the following methods.
The PointLight’s position is defined in the local coordinate system of the node.
Constants
The PointLight node object defines the following flags.
These flags, when enabled using the setCapability method, allow an application to invoke methods that respectively read position, write position, read attenuation parameters, and write attenuation parameters. These capability flags are enforced only when the node is part of a live or compiled scene graph.
Constructors
The PointLight Node defines the following constructors.
public PointLight()
These constructors construct and initialize a point light with the specified parameters.
Methods
The PointLight node object defines the following methods.
These methods access or modify the point light's current attenuation. The values presented to the methods specify the coefficients of the attenuation polynomial, with constant providing the constant term, linear providing the linear coefficient, and quadratic providing the quadratic coefficient.
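The attenuation polynomial divides the light's intensity, so larger coefficients dim the light more quickly with distance. A sketch of just this evaluation (illustrative, not the full Java 3D lighting equation):

```java
public class Attenuation {
    // Light intensity divided by the attenuation polynomial evaluated
    // at distance d from the light:
    //   intensity / (constant + linear*d + quadratic*d*d)
    static double attenuate(double intensity, double constant,
                            double linear, double quadratic, double d) {
        return intensity / (constant + linear * d + quadratic * d * d);
    }
}
```

With the default coefficients (1, 0, 0) the polynomial is constantly 1 and the light does not attenuate at all.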
Constants
The SpotLight node object defines the following flags.
These flags, when enabled using the setCapability method, allow an application to invoke methods that respectively read and write spread angle, concentration, and direction. These capability flags are enforced only when the node is part of a live or compiled scene graph.
The SpotLight's direction vector and spread angle are defined in the local coordinate system of the node.
Constructors
The SpotLight node object defines the following constructors.
public SpotLight()
These construct and initialize a new spotlight with the parameters specified.
Methods
The SpotLight node object defines the following methods.
These methods access or modify the spread angle, in radians, of this spotlight.
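A spotlight's falloff is conventionally modeled as zero outside the spread angle and as the cosine of the off-axis angle raised to the concentration exponent inside it; a minimal sketch under that assumption:

```java
public class SpotFactor {
    // Intensity multiplier for a point whose direction from the light
    // makes angleFromAxis radians with the spotlight's axis.
    static double factor(double angleFromAxis, double spreadAngle,
                         double concentration) {
        if (angleFromAxis > spreadAngle) return 0.0;   // outside the cone
        return Math.pow(Math.cos(angleFromAxis), concentration);
    }
}
```

Higher concentration values produce a tighter, brighter central beam; a concentration of 0 lights the whole cone uniformly.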
ing if the sound is to continue playing "silently" even while it is inactive. Whenever the listener is within the Sound node's scheduling bounds, the sound is potentially audible.
Constants
The Sound object contains the following flags.
These flags, when enabled using the setCapability method, allow an application to invoke methods that respectively read and write the sound data, the initial gain information, the loop information, the release flag, the continuous play flag, the sound on/off switch, the scheduling region, the prioritization value, the duration information, and the sound playing information. These capability flags are enforced only when the node is part of a live or compiled scene graph.
This constant defines a floating point value that denotes that no filter value is set.
Filters are described in Section 6.9.3, “ConeSound Node.”
This constant denotes that the sound's duration could not be calculated; it serves as the fallback value returned by getDuration for a non-cached sound.
Constructors
The Sound node object defines the following constructors.
public Sound()
Constructs and initializes a new Sound node object that includes the following
defaults for its fields:
Parameter Default Value
soundData null
initialGain 1.0
loop 0
release flag false
continuous flag false
on switch false
scheduling region null (cannot be scheduled)
priority 1.0
Constructs and initializes a new Sound node object using the provided data and gain parameter values, and defaults for all other fields. This constructor implicitly loads the sound data associated with this node if the implementation uses sound caching.
Constructs and initializes a new Sound node object using the provided parameter
values.
Methods
The Sound node object defines the following methods.
These methods provide a way to associate different types of audio data with a
Sound node. This data can be cached (buffered) or noncached (unbuffered or
streaming). If the AudioDevice has been attached to the PhysicalEnvironment,
the sound data is made ready to begin playing. Certain functionality cannot be
applied to true streaming sound data: sound duration is unknown, looping is dis-
abled, and the sound cannot be restarted. Furthermore, depending on the imple-
mentation of the AudioDevice used, streaming, non-cached data may not be fully
spatialized.
This gain is a scale factor that is applied to the sound data associated with this
sound source to increase or decrease its overall amplitude.
Data for nonstreaming sound (such as a sound sample) can contain two loop
points marking a section of the data that is to be looped a specific number of
times. Thus, sound data can be divided into three segments: the attack (before
the begin loop point), the sustain (between the begin and end loop points), and
the release (after the end loop point). If there are no loop begin and end points
defined as part of the sound data (say for Java Media Player types that do not
contain sound samples), then the begin loop point is set at the beginning of the
sound data, and the end loop point at the end of the sound data. If this is the case,
looping the sound means repeating the whole sound. However, these begin and
end loop points can be placed anywhere within the sound data, allowing a por-
tion in the middle of the sound to be looped.
A sound can be looped a specified number of times after it is activated and
before it is completed. The loop count value explicitly sets the number of times
the sound is looped. Any non-negative number is a valid value. A value of 0
denotes that the looped section is not repeated, but is played only once. A value
of –1 denotes that the loop is repeated indefinitely.
Changing the loop count of a sound after the sound has been started will not
dynamically affect the loop count currently used by the sound playing. The new
loop count will be used the next time the sound is enabled.
For some applications, it’s useful to turn a sound source “off” but to continue
“silently” playing the sound so that when it is turned back “on” the sound picks
up playing in the same location (over time) as it would have been if the sound
had never been disabled (turned off). Setting the continuous flag to true causes
the sound renderer to keep track of where (over time) the sound would be play-
ing even when the sound is disabled.
These two methods access or modify the Sound node’s scheduling bounds. This
bounds is used as the scheduling region when the scheduling bounding leaf is set
to null. A sound is scheduled for activation when its scheduling region inter-
sects the ViewPlatform’s activation volume. The getSchedulingBounds method
returns a copy of the associated bounds.
These two methods access or modify the Sound node’s scheduling bounding leaf.
When set to a value other than null, this bounding leaf overrides the scheduling
bounds object and is used as the scheduling region.
These methods access or modify the Sound node’s priority, which is used to rank
concurrently playing sounds in order of importance during playback. When more
sounds are started than the AudioDevice can handle, the Sound node with the
lowest priority ranking is deactivated. If a sound is deactivated (due to a sound
with a higher priority being started), it is automatically reactivated when
resources become available (for example, when a sound with a higher priority
finishes playing) or when the ordering of sound nodes is changed due to a change
in a Sound node’s priority.
Lower-priority sounds that fit within the remaining channels will still be
played, even when a higher-priority sound cannot be played due to a lack of
channels. For example, assume we have eight channels available for playing
sounds. After ordering four sounds, we begin playing them in order, checking
whether the number of channels required to play a given sound is actually
available before the sound is played. Furthermore, say the first sound needs three
channels to play, the second sound needs four channels, the third sound needs
three channels, and the fourth sound needs only one channel. The first and
second sounds can be started because together they require seven of the eight available
channels. The third sound cannot be audibly started because it requires three
channels and only one is still available. Consequently, the third sound starts play-
ing “silently.” The fourth sound can and will be started since it only requires one
channel. The third sound will be made audible when three channels become
available (i.e., when the first or second sound is finished playing).
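The allocation rule in the example above can be sketched in plain Java. This helper class is illustrative only and is not part of the Java 3D API; it assumes the sounds are already ordered by priority and that each sound starts audibly only if enough channels remain, otherwise starting "silently":

```java
import java.util.ArrayList;
import java.util.List;

public class ChannelAllocator {
    // Returns the indices of the sounds that start audibly. Sounds are
    // assumed to be given in priority order; a sound that does not fit in
    // the remaining channels starts "silently" and is skipped here.
    public static List<Integer> allocate(int totalChannels, int[] channelsNeeded) {
        List<Integer> audible = new ArrayList<>();
        int free = totalChannels;
        for (int i = 0; i < channelsNeeded.length; i++) {
            if (channelsNeeded[i] <= free) {
                free -= channelsNeeded[i];
                audible.add(i);   // enough channels: audible start
            }
            // else: the sound plays silently until channels are released
        }
        return audible;
    }
}
```

With eight channels and sounds needing 3, 4, 3, and 1 channels, this reproduces the worked example: the first, second, and fourth sounds start audibly; the third starts silently.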
Sounds given the same priority are ordered randomly. If the application wants a
specific ordering it must assign unique priorities to each sound.
Methods to determine what audio output resources are required for playback of a
Sound node on a particular AudioDevice and to determine the currently available
audio output resources are described in Chapter 12, “Audio Devices.”
These two methods access or modify the playing state of this sound (that is,
whether the sound is enabled). When enabled, the sound source is started and
thus can potentially be heard, depending on its activation state, gain control
parameters, continuation state, and spatialization parameters. If the continuous
state is true and the sound is not active, enabling the sound starts the sound
silently “playing” so that when the sound is activated, the sound is (potentially)
heard from somewhere in the middle of the sound data. The activation state can
change from active to inactive any number of times without stopping or starting
the sound. To restart a sound at the beginning of its data, re-enable the sound by
calling setEnable with a value of true.
Setting the enable flag to true during construction will act as a request to start
the sound playing “as soon as it can” be started. This could be close to immedi-
ately in limited cases, but several conditions, detailed below, must be met for a
sound to be ready to be played.
This method retrieves the sound’s “ready” status. If this sound is fully prepared
for playing (either audibly or silently) on all initialized audio devices, this
method returns true. Sound data associated with a Sound node, either during con-
struction (when the MediaContainer is passed into the constructor as a parame-
ter) or by calling setSoundData(), can be prepared to begin playing only after
the following conditions are satisfied:
• The Sound node has non-null sound data associated with it
• The Sound node is live
A sound source will not be heard unless it is both enabled (turned on) and acti-
vated. If this sound is audibly playing on any initialized audio device, this
method will return a status of true.
When the sound finishes playing its sound data (including all loops), it is implic-
itly disabled.
This method returns the sound’s silent status. If this sound is silently playing on
any initialized audio device, this method returns true.
This method returns the length of time (in milliseconds) that the sound media
associated with the sound source could run (including the number of times its
loop section is repeated) if it plays to completion. If the sound media type is
streaming, or if the sound is looped indefinitely, then a value of –1 (implying
infinite length) is returned.
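The duration rule above, combined with the loop-count semantics described earlier (a loop count of 0 plays the sustain section once; –1 loops indefinitely), can be sketched as a small calculation. This class is illustrative only and is not part of the Java 3D API; the attack/sustain/release segment lengths are hypothetical inputs:

```java
public class SoundDuration {
    // Returns the total playing time in milliseconds, or -1 if the sound
    // loops indefinitely (loopCount == -1), mirroring getDuration's rule.
    public static long totalDurationMs(long attackMs, long sustainMs,
                                       long releaseMs, int loopCount) {
        if (loopCount == -1) {
            return -1;  // looped indefinitely: duration is unknown
        }
        // A loop count of 0 plays the sustain (loop) section exactly once,
        // so the section is heard loopCount + 1 times in total.
        return attackMs + (loopCount + 1L) * sustainMs + releaseMs;
    }
}
```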
When a sound is started, it may use more than one channel on the AudioDevice
on which it is played. This method retrieves the number of channels being used
to render this sound on the audio device associated with the VirtualUniverse’s
primary view. The method returns 0 if the sound is not playing.
Constructors
The BackgroundSound node specifies the following constructors.
public BackgroundSound()
The default constructor constructs a new BackgroundSound node with default
values for all parameters.
The first constructor constructs a new BackgroundSound node using only the
provided parameter values for the sound data and initial gain. The second con-
structor uses the provided parameter values for the sound data, initial gain, the
number of times the loop is looped, a flag denoting whether the sound data is
played to the end, a flag denoting whether the sound plays silently when dis-
abled, whether sound is switched on or off, the sound activation region, and a
priority value denoting the playback priority ranking.
Constants
The PointSound object contains the following flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write the position and the dis-
tance gain array. These capability flags are enforced only when the node is part
of a live or compiled scene graph.
Constructors
The PointSound node object defines the following constructors.
public PointSound()
Constructs a PointSound node object that includes the defaults for a Sound
object plus the following defaults for its own fields:
Parameter Default Value
Position vector (0.0, 0.0, 0.0)
distanceGain null (no attenuation performed)
Both of these constructors construct a PointSound node object using only the
provided parameter values for sound data, sample gain, and position. The
remaining fields are set to the default values specified earlier. The first form uses
vectors as input for its position. The second form uses individual float parameters
for the elements of the position vector.
public PointSound(MediaContainer soundData, float initialGain,
int loopCount, boolean release, boolean continuous,
boolean enable, Bounds region, float priority,
Point3f position, Point2f distanceGain[])
public PointSound(MediaContainer soundData, float initialGain,
int loopCount, boolean release, boolean continuous,
boolean enable, Bounds region, float priority, float posX,
float posY, float posZ, Point2f distanceGain[])
public PointSound(MediaContainer soundData, float initialGain,
int loopCount, boolean release, boolean continuous,
boolean enable, Bounds region, float priority,
Point3f position, float attenuationDistance[],
float attenuationGain[])
These four constructors construct a PointSound node object using the provided
parameter values. The first and third forms use points as input for the position.
The second and fourth forms use individual float parameters for the elements of
the position. The first and second forms accept an array of Point2f for the dis-
tance attenuation values where each pair in the array contains a distance and a
gain scale factor. The third and fourth forms accept separate arrays for the com-
ponents of distance attenuation, namely, the distance and gain scale factors. See
the description for the setDistanceGain method, below, for details on how the
separate arrays are interpreted.
Methods
The PointSound node object defines the following methods.
These methods set and retrieve the position in 3D space from which the sound
radiates.
These methods set and retrieve the sound’s distance attenuation. If this is not set,
no distance gain attenuation is performed (equivalent to using a gain scale factor
of 1.0 for all distances). See Figure 6-2. Gain scale factors are associated with
distances from the listener to the sound source via an array of distance and gain
scale factor pairs. The gain scale factor applied to the sound source is determined
by finding the range of values distance[i] and distance[i+1] that includes
the current distance from the listener to the sound source, then linearly interpo-
lating the corresponding values gain[i] and gain[i+1] by the same amount.
If the distance from the listener to the sound source is less than the first distance
in the array, the first gain scale factor is applied to the sound source. This creates
a spherical region around the listener within which all sound gain is uniformly
scaled by the first gain in the array.
If the distance from the listener to the sound source is greater than the last dis-
tance in the array, the last gain scale factor is applied to the sound source.
The first form of setDistanceGain takes these pairs of values as an array of
Point2f. The second form accepts two separate arrays for these values. The
distance and gainScale arrays should be of the same length. If the gainScale
array is longer than the distance array, the extra gainScale elements are
ignored. If the gainScale array is shorter than the distance array, the last
gainScale value is repeated to match the length of the distance array.
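The interpolation and clamping rules above can be sketched as a standalone gain lookup. This class is illustrative only and is not Java 3D code; it assumes the distance array is monotonically increasing, as the specification requires:

```java
public class DistanceGain {
    // distance[] holds monotonically increasing distances; gain[] holds the
    // matching gain scale factors. Values outside the array range clamp to
    // the first or last gain; values inside are linearly interpolated.
    public static float gainAt(float[] distance, float[] gain, float d) {
        if (d <= distance[0]) {
            return gain[0];        // inside the innermost sphere: first gain
        }
        int last = distance.length - 1;
        if (d >= distance[last]) {
            return gain[last];     // beyond the outermost distance: last gain
        }
        // Find the pair distance[i], distance[i+1] that brackets d and
        // linearly interpolate the corresponding gains by the same amount.
        for (int i = 0; i < last; i++) {
            if (d <= distance[i + 1]) {
                float t = (d - distance[i]) / (distance[i + 1] - distance[i]);
                return gain[i] + t * (gain[i + 1] - gain[i]);
            }
        }
        return gain[last];  // unreachable for well-formed input
    }
}
```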
There are two methods for getDistanceGain, one returning an array of points,
the other returning separate arrays for each attenuation component.
Figure 6-2 PointSound Distance Gain Attenuation (gain scale factor as a
function of the distance from the listener to the sound source)
Constants
The ConeSound object contains the following flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write the direction and the
angular attenuation array. These capability flags are enforced only when the node
is part of a live or compiled scene graph.
Constructors
The ConeSound node object defines the following constructors.
public ConeSound()
Constructs a ConeSound node object that includes the defaults for a PointSound
object plus the following defaults for its own fields:
Parameter Default Value
direction vector (0.0, 0.0, 1.0)
Both of these constructors construct a ConeSound node object using only the
provided parameter values for sound, overall initial gain, position, and direction.
The remaining fields are set to the default values listed earlier. The first form
uses points as input for its position and direction. The second form uses individ-
ual float parameters for the elements of the position and direction vectors.
Methods
The ConeSound node object defines the following methods.
These methods set and retrieve the ConeSound’s two distance attenuation arrays.
If these are not set, no distance gain attenuation is performed (equivalent to using
a distance gain of 1.0 for all distances). If only one distance attenuation array is
set, spherical attenuation is assumed (see Figure 6-4). If both a front and back
distance attenuation are set, elliptical attenuation regions are defined (see
Figure 6-5). Use the PointSound setDistanceGain method to set the front dis-
tance attenuation array separately from the back distance attenuation array.
A front distance attenuation array defines monotonically increasing distances
from the sound source origin along the position direction vector. A back distance
attenuation array (if given) defines monotonically increasing distances from the
sound source origin along the negative direction vector. The two arrays must be
of the same length. The backDistance[i] gain values must be less than or equal
to frontDistance[i] gain values.
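The constraints just stated can be sketched as a standalone validation routine. This class is illustrative only and is not part of the Java 3D API; the method and parameter names are hypothetical:

```java
public class ConeAttenuationCheck {
    // Checks the constraints on paired front and back attenuation arrays:
    // all four arrays must have the same length, each back gain must not
    // exceed the matching front gain, and both distance arrays must be
    // monotonically increasing.
    public static boolean isValid(float[] frontDistance, float[] frontGain,
                                  float[] backDistance, float[] backGain) {
        int n = frontDistance.length;
        if (frontGain.length != n || backDistance.length != n
                || backGain.length != n) {
            return false;   // the arrays must be of the same length
        }
        for (int i = 0; i < n; i++) {
            if (backGain[i] > frontGain[i]) {
                return false;   // back gain must be <= front gain
            }
            if (i > 0 && (frontDistance[i] <= frontDistance[i - 1]
                       || backDistance[i] <= backDistance[i - 1])) {
                return false;   // distances must increase monotonically
            }
        }
        return true;
    }
}
```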
Figure 6-4 ConeSound with a Single Distance Gain Attenuation Array
Gain scale factors are associated with distances from the listener to the sound
source via an array of distance and gain scale factor pairs (see Figure 6-2). The
gain scale factor applied to the sound source is the linear interpolated gain value
within the distance value range that includes the current distance from the lis-
tener to the sound source.
The getDistanceGainLength method (defined in PointSound) returns the length
of all distance gain attenuation arrays, including the back distance gain arrays.
Arrays passed into getBackDistanceGain methods should all be at least this size.
This value is the sound source’s direction vector. It is the axis from which angu-
lar distance is measured.
These methods set and retrieve the sound’s angular gain and filter attenuation
arrays. If these are not set, no angular gain attenuation or filtering is performed
(equivalent to using an angular gain scale factor of 1.0 and an angular filter of
NO_FILTER for all distances). This attenuation is defined as a triple of angular
distance, gain scale factor, and filter values. The distance is measured as the
angle in radians between the ConeSound’s direction vector and the vector from
the sound source position to the listener. Both the gain scale factor and filter
applied to the sound source are the linear interpolation of values within the dis-
tance value range that includes the angular distance from the sound source axis.
If the angular distance between the listener-sound-position vector and the
sound’s direction vector is less than the first distance in the array, the first
gain scale factor and first filter are applied to the sound source. This creates a
conical region around the sound source’s direction axis within which the sound
is uniformly attenuated by the first gain and the first filter in the array.
If the angular distance between the listener-sound-position vector and the
sound’s direction vector is greater than the last distance in the array, the last
gain scale factor and last filter are applied to the sound source.
Distance elements in this array of points are a monotonically increasing set of
floating point numbers measured from 0 to π radians. Gain scale factor elements
in this list of points can be any positive floating-point numbers. While for most
applications this list of gain scale factors will usually be monotonically decreas-
ing, they do not have to be. The filter (for now) is a single simple frequency cut-
off value.
In the first form of setAngularAttenuation, only the angular distance and
angular gain scale factor pairs are given. The filter values for these tuples are
The reverberation attributes for these two regions could be set to represent their
physical differences so that active sounds are rendered differently depending on
which region the listener is in.
Constants
The Soundscape node object defines the following flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write the application region and
the aural attributes. These capability flags are enforced only when the node is
part of a live or compiled scene graph.
Constructors
The Soundscape node object defines the following constructors.
public Soundscape()
Constructs a Soundscape node object that includes the following defaults for its
elements:
Parameter Default Value
application region null
aural attributes null
This constructor constructs a Soundscape node object using the specified
application region and aural attributes.
Methods
The Soundscape node object defines the following methods.
These two methods access or modify the Soundscape node’s application bounds.
This bounds is used as the application region when the application bounding leaf
is set to null. The aural attributes associated with this Soundscape are used to
render the active sounds when this application region intersects the ViewPlat-
form’s activation volume. The getApplicationBounds method returns a copy of
the associated bounds.
These two methods access or modify the Soundscape node’s application bound-
ing leaf. When set to a value other than null, this bounding leaf overrides the
application bounds object and is used as the application region.
These two methods access or modify the aural attributes of this Soundscape. Set-
ting it to null results in default attribute use.
Constants
The ViewPlatform node object defines the following flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write the view attach policy.
These capability flags are enforced only when the node is part of a live or com-
piled scene graph.
Constructors
public ViewPlatform()
Constructs and initializes a new ViewPlatform leaf node object with default
parameters:
Parameter Default Value
view attach policy View.NOMINAL_HEAD
activation radius 62
Methods
The ViewPlatform node object defines the following methods:
The activation radius defines an activation volume surrounding the center of the
ViewPlatform. This activation volume intersects with the scheduling regions and
application regions of other leaf node objects to determine which of those objects
may affect rendering.
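The intersection test described above can be sketched for the common case of spherical bounds. This class is illustrative only and is not Java 3D code; it reduces the activation volume to a sphere of the given activation radius centered at the ViewPlatform origin:

```java
public class ActivationTest {
    // Two spheres intersect when the distance between their centers does
    // not exceed the sum of their radii. A leaf node's scheduling or
    // application region (here a sphere) can affect rendering when it
    // intersects the ViewPlatform's activation volume.
    public static boolean intersects(double[] platformCenter, double activationRadius,
                                     double[] regionCenter, double regionRadius) {
        double dx = regionCenter[0] - platformCenter[0];
        double dy = regionCenter[1] - platformCenter[1];
        double dz = regionCenter[2] - platformCenter[2];
        double distSq = dx * dx + dy * dy + dz * dz;
        double sum = activationRadius + regionRadius;
        return distSq <= sum * sum;   // compare squared to avoid a sqrt
    }
}
```

With the default activation radius of 62, a region sphere of radius 50 centered 100 units away intersects the activation volume, while one 200 units away does not.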
Different leaf objects interact with the ViewPlatform’s activation volume differ-
ently. The Background, Clip, and Soundscape leaf objects each define a set of
attributes and an application region in which those attributes are applied. If more
than one node of a given type (Background, Clip, or Soundscape) intersects the
ViewPlatform’s activation volume, the “most appropriate” node is selected.
Sound leaf objects begin playing their associated sounds when their scheduling
region intersects a ViewPlatform’s activation volume. Multiple sounds may be
active at the same time.
Behavior objects act somewhat differently. Those Behavior objects with schedul-
ing regions that intersect a ViewPlatform’s activation volume become candidates
for scheduling. Effectively, a ViewPlatform’s activation volume becomes an
additional qualifier on the scheduling of all Behavior objects. See Chapter 10,
“Behaviors and Interpolators,” for more details.
The view attach policy determines how Java 3D places the user’s virtual eye
point as a function of head position. See Section 9.4.3, “View Attach Policy,” for
details.
Constants
The Morph node specifies the following flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write the node’s geometry
arrays, appearance, weights, collision bounds, and appearance override enable
components.
Constructors
The Morph node specifies the following constructors.
Constructs and initializes a new Morph leaf node with the specified array of
GeometryArray objects. Default values are used for all other parameters:
Parameter Default Value
appearance null
weights [1, 0, 0, 0, ...]
collision bounds null
appearance override enable false
A null appearance object specifies that default values are used for all appearance
attributes.
Constructs and initializes a new Morph leaf node with the specified array of
GeometryArray objects and the specified Appearance object. The length of the
geometryArrays parameter determines the number of weighted geometry arrays
in this Morph node. If geometryArrays is null, then a NullPointerException
is thrown. If the Appearance component is null, then default values are used for
all appearance attributes.
Methods
The Morph node specifies the following methods.
This method sets the array of GeometryArray objects in the Morph node. Each
GeometryArray component specifies colors, normals, and texture coordinates.
The length of the geometryArrays parameter must be equal to the length of the
array with which this Morph node was created; otherwise, an Illegal-
ArgumentException is thrown.
This method retrieves a single geometry array from the Morph node. The index
parameter specifies which array is returned.
These methods set and retrieve the Appearance component of this Morph node.
The Appearance component specifies material, texture, texture environment,
transparency, or other rendering parameters. Setting it to null results in default
attribute use.
These methods set and retrieve the morph weight vector component of this
Morph node. The Morph node “weights” the corresponding GeometryArray by
the amount specified. The length of the weights parameter must be equal to the
length of the array with which this Morph node was created; otherwise, an Ille-
galArgumentException is thrown.
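The weighting the Morph node performs can be sketched for vertex coordinates alone. This class is illustrative only and is not Java 3D code; each output coordinate is the weighted sum of the corresponding coordinates across the geometry arrays:

```java
public class MorphBlend {
    // geometries[k][j] is the j-th coordinate of the k-th geometry array.
    // weights.length must equal geometries.length, mirroring the rule that
    // the weights array must match the number of geometry arrays.
    public static double[] blend(double[][] geometries, double[] weights) {
        if (geometries.length != weights.length) {
            throw new IllegalArgumentException("weights length mismatch");
        }
        double[] out = new double[geometries[0].length];
        for (int k = 0; k < geometries.length; k++) {
            for (int j = 0; j < out.length; j++) {
                out[j] += weights[k] * geometries[k][j];   // weighted sum
            }
        }
        return out;
    }
}
```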
These methods set and retrieve the collision bounding object of this node.
These methods check if the geometry component of this morph node under path
intersects with the pickShape.
These methods set and retrieve the flag that indicates whether this node’s appear-
ance can be overridden. If the flag is true, this node’s appearance may be overrid-
den by an AlternateAppearance leaf node, regardless of the value of the ALLOW_
APPEARANCE_WRITE capability bit. The default value is false. See Section 6.15,
“AlternateAppearance Node.”
Constants
The AlternateAppearance node specifies the following flags:
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write the node’s influencing
bounds and bounds leaf information, appearance information, and scope infor-
mation components.
Constructors
The AlternateAppearance node specifies the following constructors.
Methods
The AlternateAppearance node specifies the following methods.
These methods set and retrieve the appearance of this AlternateAppearance node.
This appearance overrides the appearance in those Shape3D and Morph nodes
affected by this AlternateAppearance node.
The first method replaces the node at the specified index in this AlternateAppear-
ance node’s list of scopes with the specified Group node. The second method
retrieves the Group node at the specified index from this AlternateAppearance
node’s list of scopes. By default, AlternateAppearance nodes are scoped only by
their influencing bounds. This allows them to be further scoped by a list of nodes
in the hierarchy.
The first method inserts the specified Group node into this AlternateAppearance
node’s list of scopes at the specified index. The second method removes the node
at the specified index from this AlternateAppearance node’s list of scopes. If this
operation causes the list of scopes to become empty, this AlternateAppearance
will have universe scope; all nodes within the region of influence will be affected
by this AlternateAppearance node. By default, AlternateAppearance nodes are
scoped only by their influencing bounds. This allows them to be further scoped
by a list of nodes in the hierarchy.
This method returns the number of nodes in this AlternateAppearance node’s list
of scopes. If this number is 0, the list of scopes is empty and this AlternateAp-
pearance node has universe scope; all nodes within the region of influence are
affected by this AlternateAppearance node.
Java 3D provides application programmers with two different means for reus-
ing scene graphs. First, multiple scene graphs can share a common subgraph.
Second, the node hierarchy of a common subgraph can be cloned, while still
sharing large component objects such as geometry and texture objects. In the first
case, changes in the shared subgraph affect all scene graphs that refer to the
shared subgraph. In the second case, each instance is unique—a change in one
instance does not affect any other instance.
Figure 7-1 Sharing a Subgraph
A shared subgraph may not contain any of the following types of leaf nodes:
• AlternateAppearance
• Background
• BoundingLeaf
• Behavior
• Clip
• Fog
• ModelClip
• Soundscape
• ViewPlatform
Constructors
public SharedGroup()
Constructs and initializes a new SharedGroup node object.
Methods
The SharedGroup node defines the following methods.
This method compiles the source SharedGroup associated with this object and
creates and caches a newly compiled scene graph.
Constants
The Link node object defines two flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write the SharedGroup node
pointed to by this Link node. These capability flags are enforced only when the
node is part of a live or compiled scene graph.
Constructors
The Link node object defines two constructors.
public Link()
public Link(SharedGroup sharedGroup)
The first form constructs a Link node object that does not yet point to a
SharedGroup node. The second form constructs a Link node object that points to
the specified SharedGroup node.
Methods
The Link node object defines two methods.
These methods access and modify the SharedGroup node associated with this
Link leaf node.
Methods
These methods start the cloning of the subgraph. The optional forceDuplicate
parameter, when set to true, causes leaf NodeComponent objects to ignore their
duplicateOnCloneTree value and always be duplicated (see Section 7.2.1,
“References to Node Component Objects”). The allowDanglingReferences
parameter, when set to true, will permit the cloning of a subgraph even when a
dangling reference is generated (see Section 7.2.3, “Dangling References”). Set-
ting forceDuplicate and allowDanglingReferences to false is the equivalent
of calling cloneTree without any parameters. This will result in NodeCompo-
nent objects being either duplicated or referenced in the cloned node, based on
their duplicateOnCloneTree value. A DanglingReferenceException will be
thrown if a dangling reference is encountered.
When the cloneTree method is called on a node, that node is duplicated along
with its entire internal state. If the node is a Group node, cloneTree is then
called on each of the node’s children.
The cloneTree method cannot be called on a live or compiled scene graph.
Figure 7-2 Referenced and Duplicated NodeComponent Objects
Methods
These methods set a flag that controls whether a NodeComponent object is dupli-
cated or referenced on a call to cloneTree. By default this flag is false, mean-
ing that the NodeComponent object will not be duplicated on a call to
cloneTree—newly created leaf nodes will refer to the original NodeComponent
object instead.
If the cloneTree method is called with the forceDuplicate parameter set to
true, the duplicateOnCloneTree flag is ignored and the entire scene graph is
duplicated.
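The rule above can be modeled with a small sketch. This is illustrative only and is not the Java 3D implementation; the class names are hypothetical, and the model reduces cloning to a single leaf with one component:

```java
public class CloneModel {
    static class Component {
        boolean duplicateOnCloneTree = false;  // default, as in Java 3D
    }

    static class Leaf {
        Component component;
        Leaf(Component c) { component = c; }

        // A component is duplicated when forceDuplicate is true or its
        // duplicateOnCloneTree flag is set; otherwise the cloned leaf
        // references the original component.
        Leaf cloneLeaf(boolean forceDuplicate) {
            if (forceDuplicate || component.duplicateOnCloneTree) {
                Component copy = new Component();
                copy.duplicateOnCloneTree = component.duplicateOnCloneTree;
                return new Leaf(copy);      // duplicated component
            }
            return new Leaf(component);     // shared (referenced) component
        }
    }
}
```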
Figure 7-3 References to Other Scene Graph Nodes
A leaf node that needs to update referenced nodes upon being duplicated by a
call to cloneTree must implement the updateNodeReferences method. By
using this method, the cloned leaf node can determine if any nodes referenced by
it have been duplicated and, if so, update the appropriate references to their
cloned counterparts.
Suppose, for instance, that the leaf node Lf1 in Figure 7-3 implemented the
updateNodeReferences method. Once all nodes had been duplicated, the
cloneTree method would then call each cloned leaf node’s updateNodeRefer-
ences method. When cloned leaf node Lf2’s method was called, Lf2 could ask if the
node N1 had been duplicated during the cloneTree operation. If the node had
been duplicated, leaf Lf2 could then update its internal state with the cloned
node, N2 (see Figure 7-4).
Figure 7-4 Updated Subgraph after updateNodeReferences Call
Methods
This SceneGraphObject node method is called by the cloneTree method after all
nodes in the subgraph have been cloned. The user can query the NodeReference-
Table object (see Section 7.2.5, “NodeReferenceTable Object”) to determine if
any nodes that the SceneGraphObject node references have been duplicated by
the cloneTree call and, if so, what the corresponding node is in the new sub-
graph. If a user extends a predefined Java 3D object and adds a reference to
another node, this method must be defined in order to ensure proper operation of
the cloneTree method. The first statement in the user’s updateNodeReferences
method must be super.updateNodeReferences(referenceTable). For pre-
defined Java 3D nodes, this method will be implemented automatically.
The NodeReferenceTable object is passed to the updateNodeReferences method
and allows references from the old subgraph to be translated into references in
the cloned subgraph. The translation is performed by the getNew-
NodeReference method.
This method takes a reference to the node in the original subgraph as an input
parameter and returns a reference to the equivalent node in the just-cloned sub-
graph. If the equivalent node in the cloned subgraph does not exist, either an
exception is thrown or a reference to the original node is returned (see
Section 7.2.3, “Dangling References”).
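The translation step can be sketched in plain Java. The class and method names below (other than getNewNodeReference, whose contract is described above) are invented for illustration; the real NodeReferenceTable is constructed by cloneTree itself and cannot be instantiated by the user:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the NodeReferenceTable translation step.
// A duplicated node maps to its clone; a node that was not duplicated
// maps to itself (the "allow dangling references" behavior).
public class NodeReferenceSketch {
    private final Map<Object, Object> oldToNew = new HashMap<>();

    public void recordClone(Object original, Object clone) {
        oldToNew.put(original, clone);
    }

    // Mirrors getNewNodeReference: translate an old-subgraph reference
    // into the equivalent reference in the just-cloned subgraph.
    public Object getNewNodeReference(Object original) {
        return oldToNew.getOrDefault(original, original);
    }

    public static void main(String[] args) {
        NodeReferenceSketch table = new NodeReferenceSketch();
        Object n1 = new Object(), n2 = new Object();
        table.recordClone(n1, n2); // N1 was duplicated as N2 by cloneTree

        // Inside a leaf's updateNodeReferences, the old reference is
        // swapped for its cloned counterpart:
        System.out.println(table.getNewNodeReference(n1) == n2);   // true
        Object outside = new Object();
        System.out.println(table.getNewNodeReference(outside) == outside); // true
    }
}
```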
NodeComponent cloneNodeComponent();
void duplicateNodeComponent(NodeComponent nc, boolean forceDuplicate); New in 1.2
Constructors
Methods
This method takes a reference to the node in the original subgraph as an input
parameter and returns a reference to the equivalent node in the just-cloned sub-
graph. If the equivalent node in the cloned subgraph does not exist, either an
exception is thrown or a reference to the original node is returned (see
Section 7.2.3, “Dangling References”).
public Node cloneNode(boolean forceDuplicate) {
    RotationBehavior r =
        new RotationBehavior(objectTransform, numFrames);
    r.duplicateNode(this, forceDuplicate);
    return r;
}

// duplicateNode is needed to duplicate all super class
// data as well as all user data.
public void duplicateNode(Node originalNode, boolean forceDuplicate) {
    super.duplicateNode(originalNode, forceDuplicate);
    // Nothing to do here - all unique data was handled
    // in the constructor in the cloneNode routine.
}
Constants
The Appearance component object defines the following flags.
[Class hierarchy of the node component objects:]
SceneGraphObject
    NodeComponent
        Alpha
        Appearance
        AuralAttributes
        ColoringAttributes
        LineAttributes
        PointAttributes
        PolygonAttributes
        RenderingAttributes
        TextureAttributes
        TransparencyAttributes
        Material
        MediaContainer
        TextureUnitState
        TexCoordGeneration
        Texture
            Texture2D
            Texture3D
        ImageComponent
            ImageComponent2D
            ImageComponent3D
        DepthComponent
            DepthComponentFloat
            DepthComponentInt
            DepthComponentNative
Bounds
    BoundingBox
    BoundingPolytope
    BoundingSphere
Transform3D
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read and write the specified component object refer-
ence (material, texture, texture coordinate generation, and so forth). These
capability flags are enforced only when the object is part of a live or compiled
scene graph.
Constructors
The Appearance object has the following constructor.
public Appearance()
Constructs and initializes an Appearance object using defaults for all state vari-
ables. All component object references are initialized to null.
Methods
The Appearance object has the following methods.
The Material object specifies the desired material properties used for lighting.
Setting it to null disables lighting.
The Texture object specifies the desired texture map and texture parameters. Set-
ting it to null disables texture mapping. Applications must not set individual
texture component objects (texture, textureAttributes, or texCoordGeneration)
and the texture unit state array in the same Appearance object. Doing so will
result in an exception being thrown.
These methods set and retrieve the TextureAttributes object. Setting it to null
results in default attribute use. Applications must not set individual texture com-
ponent objects (texture, textureAttributes, or texCoordGeneration) and the tex-
ture unit state array in the same Appearance object. Doing so will result in an
exception being thrown.
These methods set and retrieve the ColoringAttributes object. Setting it to null
results in default attribute use.
These methods set and retrieve the RenderingAttributes object. Setting it to null
results in default attribute use.
These methods set and retrieve the PolygonAttributes object. Setting it to null
results in default attribute use.
These methods set and retrieve the LineAttributes object. Setting it to null
results in default attribute use.
These methods set and retrieve the PointAttributes object. Setting it to null
results in default attribute use.
These methods set and retrieve the TexCoordGeneration object. Setting it to null
disables texture coordinate generation.
These methods set and retrieve the texture-unit state for this Appearance object
(see Section 8.1.15, “TextureUnitState Object”). The first method sets the texture
unit state array to the specified array. A shallow copy of the array of references
to the TextureUnitState objects is made. If the specified array is null or if the
length of the array is 0, multi-texture is disabled. Within the array, a null Texture-
UnitState element disables the corresponding texture unit. The second method
sets the texture unit state array object at the specified index within the texture
unit state array to the specified object. If the specified object is null, the corresponding texture unit is disabled. The index must be within the range [0, stateArray.length-1]. Applications must not set individual texture component objects
(texture, textureAttributes, or texCoordGeneration) and the texture unit state
array in the same Appearance object. Doing so will result in an exception being
thrown.
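The exclusivity rule above can be sketched in plain Java. The class, fields, and Object stand-ins below are invented for illustration; the real Appearance object enforces this rule internally:

```java
// Hypothetical sketch of the exclusivity rule: an appearance may use
// EITHER the single-texture component objects OR the texture unit
// state array, never both in the same object.
public class AppearanceSketch {
    private Object texture;            // stands in for the Texture reference
    private Object[] textureUnitState; // stands in for TextureUnitState[]

    public void setTexture(Object t) {
        if (textureUnitState != null && textureUnitState.length > 0)
            throw new IllegalStateException("texture unit state array already in use");
        texture = t;
    }

    public void setTextureUnitState(Object[] states) {
        if (texture != null)
            throw new IllegalStateException("individual texture component already in use");
        // A shallow copy of the array of references is made, as described.
        textureUnitState = (states == null) ? null : states.clone();
    }

    public int getTextureUnitCount() {
        // Mirrors the texture unit state length query: a null array yields 0.
        return (textureUnitState == null) ? 0 : textureUnitState.length;
    }
}
```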
This method retrieves the length of the texture unit state array from this Appear-
ance object. The length of this array specifies the maximum number of texture
units that will be used by this appearance object. If the array is null, a count of 0
is returned.
Constants
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write its color component and
shade model component information.
Constructors
public ColoringAttributes()
Methods
These methods set and retrieve the intrinsic color of this ColoringAttributes com-
ponent object. This color is only used for unlit geometry. If lighting is enabled,
the material colors are used in the lighting equation to produce the final color.
When vertex colors are present in unlit geometry, those vertex colors are used in
place of this ColoringAttributes color unless the vertex colors are ignored.
These methods set and retrieve the shade model for this ColoringAttributes com-
ponent object. The shade model is one of the following:
• FASTEST: Uses the fastest available method for shading.
• NICEST: Uses the nicest (highest quality) available method for shading.
• SHADE_FLAT: Does not interpolate color across the primitive.
• SHADE_GOURAUD: Smoothly interpolates the color at each vertex
across the primitive.
Constants
The LineAttributes object specifies the following variables.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read and write its individual component field infor-
mation.
Draws a dashed line. Ideally, this will be drawn with a repeating pattern of eight
pixels on and eight pixels off.
Draws a dotted line. Ideally, this will be drawn with a repeating pattern of one
pixel on and seven pixels off.
Draws a dashed-dotted line. Ideally, this will be drawn with a repeating pattern
of seven pixels on, four pixels off, one pixel on, and four pixels off.
Draws lines with a user-defined line pattern. The line pattern is specified with a
pattern mask and a scale factor.
Constructors
public LineAttributes()
Methods
These methods respectively set and retrieve the line width, in pixels, for this Lin-
eAttributes component object.
These methods respectively set and retrieve the line pattern for this LineAt-
tributes component object. The linePattern value describes the line pattern to
be used, which is one of the following: PATTERN_SOLID, PATTERN_DASH,
PATTERN_DOT, or PATTERN_DASH_DOT.
The set method enables or disables line antialiasing for this LineAttributes com-
ponent object. The get method retrieves the state of the line antialiasing flag.
The flag is true if line antialiasing is enabled, false if line antialiasing is dis-
abled.
These methods respectively set and retrieve the line pattern mask. The line pat-
tern mask is used when the linePattern attribute is set to PATTERN_USER_
DEFINED.
In this mode, the pattern is specified using a 16-bit mask that specifies on and off
segments. Bit 0 in the pattern mask corresponds to the first pixel of the line or
line strip primitive. A value of 1 for a bit in the pattern mask indicates that the
corresponding pixel is drawn, while a value of 0 indicates that the corresponding
pixel is not drawn. After all 16 bits in the pattern are used, the pattern is
repeated. For example, a mask of 0x00ff defines a dashed line with a repeating
pattern of eight pixels on followed by eight pixels off. A value of 0x0101 defines
a dotted line with a repeating pattern of one pixel on and seven pixels off.
The pattern continues around individual line segments of a line strip primitive. It
is restarted at the beginning of each new line strip. For line array primitives, the
pattern is restarted at the beginning of each line.
These methods respectively set and retrieve the line pattern scale factor. The line
pattern scale factor is used in conjunction with the patternMask when the line-
Pattern attribute is set to PATTERN_USER_DEFINED. The pattern is multiplied by
the scale factor such that each bit in the pattern mask corresponds to that many
consecutive pixels. For example, a scale factor of 3 applied to a pattern mask of
0x001f would produce a repeating pattern of 15 pixels on followed by 33 pixels
off. The valid range for this attribute is [1,15]. Values outside this range are
clamped.
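The pattern expansion described above can be sketched in plain Java (the helper names pixelOn and countLeadingOn are invented for this sketch):

```java
// Sketch of how a PATTERN_USER_DEFINED line pattern expands to pixels:
// each bit of the 16-bit mask covers `scale` consecutive pixels (the
// scale factor is clamped to [1,15]), and the pattern then repeats.
public class LinePatternSketch {
    public static boolean pixelOn(int patternMask, int scale, int pixel) {
        int s = Math.max(1, Math.min(15, scale)); // clamp the scale factor
        int bit = (pixel / s) % 16;               // which mask bit covers this pixel
        return ((patternMask >> bit) & 1) == 1;
    }

    public static int countLeadingOn(int patternMask, int scale) {
        int n = 0;
        while (pixelOn(patternMask, scale, n)) n++;
        return n;
    }

    public static void main(String[] args) {
        // 0x001f with scale 3: 5 bits on * 3 = 15 pixels on, then
        // 11 bits off * 3 = 33 pixels off, repeating.
        System.out.println(countLeadingOn(0x001f, 3)); // 15
        // 0x00ff with scale 1 is the dashed pattern: 8 on, 8 off.
        System.out.println(countLeadingOn(0x00ff, 1)); // 8
    }
}
```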
Constants
The PointAttributes object specifies the following variables.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read and write its individual component field infor-
mation.
Constructors
public PointAttributes()
Methods
These methods set and retrieve the point size, in pixels, for this PointAttributes component object.
The set method enables or disables point antialiasing for this PointAttributes
component object. The get method retrieves the state of the point antialiasing
flag. The flag is true if point antialiasing is enabled, false if point antialiasing
is disabled.
Constants
The PolygonAttributes object specifies the following variables.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read and write its individual component field infor-
mation.
Constructors
public PolygonAttributes()
These constructors create a new PolygonAttributes object with the specified val-
ues.
Methods
These methods set and retrieve the face culling flag for this PolygonAttributes
component object. The face culling flag is one of the following:
These methods set and retrieve the back-face normal flip flag. This flag indicates
whether vertex normals of back-facing polygons should be flipped (negated)
prior to lighting. When this flag is set to true and back-face culling is disabled,
polygons are rendered as if the polygon had two sides with opposing normals.
This feature is disabled by default.
These methods set and retrieve the polygon rasterization mode for this PolygonAttributes component object. The polygon rasterization mode is one of the following:
• POLYGON_POINT: Renders polygonal primitives as points drawn at the
vertices of the polygon.
• POLYGON_LINE: Renders polygonal primitives as lines drawn between
consecutive vertices of the polygon.
• POLYGON_FILL: Renders polygonal primitives by filling the interior of
the polygon.
These methods set and retrieve the constant polygon offset. This screen-space
offset is added to the final, device-coordinate Z value of polygon primitives.
These methods set and retrieve the polygon offset factor. This factor is multiplied by the slope of the polygon and then added to the final device coordinate Z
value of polygon primitives.
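Based on the two descriptions above, the constant offset and the offset factor plausibly combine as in OpenGL-style polygon offset; the method name below is invented for this sketch:

```java
// Sketch of the final depth offset applied to a polygon's
// device-coordinate Z value: the constant screen-space offset plus
// the offset factor scaled by the polygon's depth slope.
public class PolygonOffsetSketch {
    public static float depthOffset(float constantOffset, float factor, float slope) {
        return constantOffset + factor * slope;
    }

    public static void main(String[] args) {
        // A constant offset of 2 with factor 1 and slope 0.5:
        System.out.println(depthOffset(2.0f, 1.0f, 0.5f)); // 2.5
    }
}
```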
Constants
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write its individual test value
and function information.
Constructors
public RenderingAttributes()
Methods
These methods set and retrieve the depth buffer enable flag for this RenderingAt-
tributes component object. The flag is true if the depth buffer mode is enabled,
false if disabled.
These methods set and retrieve the depth buffer write enable flag for this Render-
ingAttributes component object. The flag is true if the depth buffer mode is
writable, false if the depth buffer is read-only.
These methods set and retrieve the alpha test value used by the alpha test func-
tion. This value is compared to the alpha value of each rendered pixel.
These methods set and retrieve the alpha test function. The alpha test function is
one of the following:
• ALWAYS: Indicates pixels are always drawn irrespective of the alpha val-
ue. This effectively disables alpha testing.
• NEVER: Indicates pixels are never drawn irrespective of the alpha value.
• EQUAL: Indicates pixels are drawn if the pixel alpha value is equal to the
alpha test value.
• NOT_EQUAL: Indicates pixels are drawn if the pixel alpha value is not
equal to the alpha test value.
• LESS: Indicates pixels are drawn if the pixel alpha value is less than the
alpha test value.
• LESS_OR_EQUAL: Indicates pixels are drawn if the pixel alpha value is
less than or equal to the alpha test value.
• GREATER: Indicates pixels are drawn if the pixel alpha value is greater
than the alpha test value.
• GREATER_OR_EQUAL: Indicates pixels are drawn if the pixel alpha val-
ue is greater than or equal to the alpha test value.
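The eight comparisons can be sketched in plain Java (the enum and method names are invented for illustration):

```java
// Plain-Java sketch of the alpha test: a pixel is drawn only if the
// comparison between its alpha value and the alpha test value passes.
public class AlphaTestSketch {
    public enum Func { ALWAYS, NEVER, EQUAL, NOT_EQUAL,
                       LESS, LESS_OR_EQUAL, GREATER, GREATER_OR_EQUAL }

    public static boolean pixelPasses(Func f, float pixelAlpha, float testValue) {
        switch (f) {
            case ALWAYS:        return true;  // effectively disables alpha testing
            case NEVER:         return false;
            case EQUAL:         return pixelAlpha == testValue;
            case NOT_EQUAL:     return pixelAlpha != testValue;
            case LESS:          return pixelAlpha <  testValue;
            case LESS_OR_EQUAL: return pixelAlpha <= testValue;
            case GREATER:       return pixelAlpha >  testValue;
            default:            return pixelAlpha >= testValue; // GREATER_OR_EQUAL
        }
    }

    public static void main(String[] args) {
        // Drop pixels whose alpha does not exceed 0.5:
        System.out.println(pixelPasses(Func.GREATER, 0.8f, 0.5f)); // true
        System.out.println(pixelPasses(Func.GREATER, 0.3f, 0.5f)); // false
    }
}
```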
These methods set and retrieve the visibility flag for this RenderingAttributes
component object. Invisible objects are not rendered (subject to the visibility pol-
icy for the current view), but they can be picked or collided with.
These methods set and retrieve the flag that indicates whether vertex colors are
ignored for this RenderingAttributes object. If ignoreVertexColors is false,
per-vertex colors are used, when present in the associated Geometry objects, tak-
ing precedence over the ColoringAttributes color and Material diffuse color. If
ignoreVertexColors is true, per-vertex colors are ignored. In this case, if light-
ing is enabled, the Material diffuse color will be used as the object color. If light-
ing is disabled, the ColoringAttributes color will be used. The default value is
false.
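The precedence just described can be summarized in a small sketch (the class and method names are invented for illustration):

```java
// Sketch of the color-source precedence: vertex colors win unless
// ignored; otherwise the lighting enable selects between the Material
// diffuse color and the ColoringAttributes color.
public class ColorSourceSketch {
    public static String colorSource(boolean ignoreVertexColors,
                                     boolean hasVertexColors,
                                     boolean lightingEnabled) {
        if (hasVertexColors && !ignoreVertexColors) return "vertex colors";
        return lightingEnabled ? "Material diffuse color"
                               : "ColoringAttributes color";
    }

    public static void main(String[] args) {
        System.out.println(colorSource(false, true, true)); // vertex colors
        System.out.println(colorSource(true, true, true));  // Material diffuse color
        System.out.println(colorSource(true, true, false)); // ColoringAttributes color
    }
}
```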
These methods set and retrieve the rasterOp enable flag for this RenderingAt-
tributes component object. When set to true, this enables logical raster operations
as specified by the setRasterOp method. Enabling raster operations effectively
disables alpha blending, which is used for transparency and antialiasing. Raster
operations, especially XOR mode, are primarily useful when rendering to the
front buffer in immediate mode. Most applications will not wish to enable this
mode.
These methods set and retrieve the raster operation function for this Renderin-
gAttributes component object. The rasterOp is one of the following:
• ROP_COPY: DST = SRC
• ROP_XOR: DST = SRC ^ DST
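The two operations can be sketched directly; the XOR case illustrates why that mode suits immediate-mode front-buffer overlays (method names are invented for this sketch):

```java
// Sketch of the two raster operations: ROP_COPY replaces the
// destination, while ROP_XOR combines it with the source. Drawing the
// same source twice in XOR mode restores the original destination.
public class RasterOpSketch {
    public static int ropCopy(int src, int dst) { return src; }
    public static int ropXor(int src, int dst)  { return src ^ dst; }

    public static void main(String[] args) {
        int dst = 0x00CAFE, src = 0x001234;
        int once  = ropXor(src, dst);  // draw the overlay
        int twice = ropXor(src, once); // draw it again to erase it
        System.out.println(twice == dst); // true: XOR twice is a no-op
    }
}
```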
Constants
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write its individual component
field information.
Constructors
public TextureAttributes()
Methods
These methods set and retrieve the texture mode parameter for this Texture-
Attributes component object. The texture mode is one of the following:
• MODULATE: Modulates the object color with the texture color.
• DECAL: Applies the texture color to the object as a decal.
• BLEND: Blends the texture blend color with the object color.
• REPLACE: Replaces the object color with the texture color.
These methods set and retrieve the texture blend color for this TextureAttributes
component object. The texture blend color is used when the texture mode param-
eter is BLEND.
This method sets the texture color table from the specified table. The individual
integer array elements are copied. The array is indexed first by color component
(r, g, b, and a, respectively) and then by color value; table.length defines the
number of color components and table[0].length defines the texture color
table size. If the table is non-null, the number of color components must either be
three, for rgb data, or four, for rgba data. The size of each array for each color
component must be the same and must be a power of 2. If table is null or if the
texture color table size is 0, the texture color table is disabled. If the texture color
table size is greater than the device-dependent maximum texture color table size
for a particular Canvas3D, the texture color table is ignored for that canvas.
When enabled, the texture color table is applied after the texture filtering opera-
tion and before texture application. Each of the r, g, b, and a components are
clamped to the range [0,1], multiplied by textureColorTableSize–1, and
rounded to the nearest integer. The resulting value for each component is then
used as an index into the respective table for that component. If the texture color
table contains three components, alpha is passed through unmodified.
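The per-channel lookup described above can be sketched in plain Java (the class and method names are invented for illustration):

```java
// Sketch of one channel of the texture color table lookup: clamp the
// filtered component to [0,1], scale by (tableSize - 1), round to the
// nearest integer, and use the result as an index into the table.
public class ColorTableSketch {
    public static int lookup(int[] channelTable, float component) {
        float c = Math.max(0f, Math.min(1f, component));
        int index = Math.round(c * (channelTable.length - 1));
        return channelTable[index];
    }

    public static void main(String[] args) {
        // A 256-entry inverting ramp for one channel (size is a power of 2):
        int[] invert = new int[256];
        for (int i = 0; i < 256; i++) invert[i] = 255 - i;
        System.out.println(lookup(invert, 0.0f)); // 255
        System.out.println(lookup(invert, 1.0f)); // 0
        System.out.println(lookup(invert, 1.5f)); // 0 (component clamped to 1)
    }
}
```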
public void getTextureColorTable(int[][] table) New in 1.2
This method retrieves the texture color table and copies it into the specified array.
If the current texture color table is null, no values are copied. The array must be
allocated by the caller and must be large enough to hold the entire table (that is,
int[numTextureColorTableComponents][textureColorTableSize]).
This method retrieves the number of color components in the current texture
color table. A value of 0 is returned if the texture color table is null.
This method retrieves the size of the current texture color table. A value of 0 is
returned if the texture color table is null.
These methods set and retrieve the texture transform object used to transform
texture coordinates. A copy of the specified Transform3D object is stored in this
TextureAttributes object.
These methods set and retrieve the perspective correction mode to be used for
color and texture coordinate interpolation. The perspective correction mode is
one of the following:
• NICEST: Uses the nicest (highest quality) available method for texture
mapping perspective correction.
• FASTEST: Uses the fastest available method for texture mapping perspec-
tive correction.
Constants
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write its individual component
field information.
Constructors
public TransparencyAttributes()
Methods
These methods set and retrieve the transparency mode for this TransparencyAttributes component object. The transparency mode is one of the following:
• FASTEST: Uses the fastest available method for transparency.
• NICEST: Uses the nicest available method for transparency.
• SCREEN_DOOR: Uses screen-door transparency. This is done using an
on/off stipple pattern in which the percentage of transparent pixels is ap-
proximately equal to the value specified by the transparency parameter.
• BLENDED: Uses alpha blended transparency. The blend equation is spec-
ified by the srcBlendFunction and dstBlendFunction attributes. The
default equation is: alpha*src + (1-alpha)*dst, where alpha is 1 –
transparency.
• NONE: No transparency; opaque object.
These methods set and retrieve this TransparencyAttributes object’s transparency value. The
transparency value is in the range [0.0, 1.0], with 0.0 being fully opaque and 1.0
being fully transparent.
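The default blend equation for BLENDED transparency can be sketched for a single color channel (the class and method names are invented for illustration):

```java
// Sketch of the default blended-transparency equation:
// alpha*src + (1-alpha)*dst, where alpha = 1 - transparency.
public class BlendSketch {
    public static float blend(float src, float dst, float transparency) {
        float alpha = 1f - transparency;
        return alpha * src + (1f - alpha) * dst;
    }

    public static void main(String[] args) {
        System.out.println(blend(1f, 0f, 0.0f));  // 1.0: fully opaque, source wins
        System.out.println(blend(1f, 0f, 1.0f));  // 0.0: fully transparent, destination wins
        System.out.println(blend(1f, 0f, 0.25f)); // 0.75
    }
}
```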
These methods set and retrieve the source blend function used in blended trans-
parency and antialiasing operations. The source function specifies the factor that
is multiplied by the source color. This value is added to the product of the desti-
nation factor and the destination color. The default source blend function is
BLEND_SRC_ALPHA. The source blend function is one of the following:
These methods set and retrieve the destination blend function used in blended
transparency and antialiasing operations. The destination function specifies the
factor that is multiplied by the destination color. This value is added to the prod-
uct of the source factor and the source color. The default destination blend func-
tion is BLEND_ONE_MINUS_SRC_ALPHA.
Constants
The Material object defines two flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write its individual component
field information.
Constructors
The Material object has the following constructors.
public Material()
Constructs and initializes a Material object using default values for all attributes.
The default values are as follows:
Constructs and initializes a new Material object using the specified parameters.
The ambient color, emissive color, diffuse color, specular color, and shininess
parameters are specified.
Methods
The Material object has the following methods.
This parameter specifies this material’s ambient color, that is, how much ambient
light is reflected by the material’s surface.
This parameter specifies the color of light, if any, that the material emits. This
color is added to the color produced by applying the lighting equation.
This parameter specifies the color of the material when illuminated by a light
source. In addition to the diffuse color (red, green, and blue), the alpha value is
used to specify transparency such that transparency = (1 – alpha). When vertex
colors are present in geometry that is being lit, those vertex colors are used in
place of this diffuse color in the lighting equation unless the vertex colors are
ignored.
These methods set and retrieve the current state of the lighting enable flag (true or false) for this Material component object.
This method returns a string representation of this Material’s values. If the scene
graph is live, only those values with their capability bit set will be displayed.
Constants
The Texture object defines the following flags:
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read, and in some cases write, its individual compo-
nent field information. The size information includes width, height, and number
of mipmap levels.
Constructors
The Texture object has the following constructor.
public Texture()
This constructor is not very useful as the default width and height are 0. The
other default values are as follows:
The format specifies the data format of the textures stored in this object. The format can be one of
the following:
• INTENSITY: Specifies Texture contains only intensity values.
• LUMINANCE: Specifies Texture contains only luminance values.
• ALPHA: Specifies Texture contains only alpha values.
• LUMINANCE_ALPHA: Specifies Texture contains luminance and alpha
values.
• RGB: Specifies Texture contains red, green, and blue color values.
• RGBA: Specifies Texture contains red, green, and blue color values, and
an alpha value.
Methods
The Texture object has the following methods.
These parameters specify the boundary mode for the S and T coordinates in this
Texture object. The boundary mode is as follows:
• CLAMP: Clamps texture coordinates to be in the range [0, 1]. A constant
boundary color is used for U,V values that fall outside this range.
• WRAP: Repeats the texture by wrapping texture coordinates that are out-
side the range [0, 1]. Only the fractional portion of the texture coordinates
is used; the integer portion is discarded.
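The two boundary modes can be sketched for a single texture coordinate (the class and method names are invented for illustration):

```java
// Sketch of the two boundary modes: CLAMP pins the coordinate to
// [0, 1]; WRAP keeps only the fractional part so the texture repeats.
public class BoundaryModeSketch {
    public static float clamp(float coord) {
        return Math.max(0f, Math.min(1f, coord));
    }

    public static float wrap(float coord) {
        return coord - (float) Math.floor(coord); // fractional part, in [0, 1)
    }

    public static void main(String[] args) {
        System.out.println(clamp(1.5f));  // 1.0
        System.out.println(wrap(1.25f));  // 0.25
        System.out.println(wrap(-0.25f)); // 0.75
    }
}
```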
This parameter specifies the minification filter function. This function is used
when the pixel being rendered maps to an area greater than one texel. The mini-
fication filter is one of the following:
• FASTEST: Uses the fastest available method for texture mapping.
• NICEST: Uses the nicest (highest quality) available method for texture
mapping.
• BASE_LEVEL_POINT: Selects the nearest texel in the level 0 texture
map.
• BASE_LEVEL_LINEAR: Performs a bilinear interpolation on the four
nearest texels in the level 0 texture map.
• MULTI_LEVEL_POINT: Selects the nearest texel in the nearest mipmap.
• MULTI_LEVEL_LINEAR: Performs trilinear interpolation of texels be-
tween four texels each from the two nearest mipmap levels.
This parameter specifies the magnification filter function. This function is used
when the pixel being rendered maps to an area less than or equal to one texel.
The value is one of the following:
• FASTEST: Uses the fastest available method for texture mapping.
• NICEST: Uses the nicest (highest quality) available method for texture
mapping.
• BASE_LEVEL_POINT: Selects the nearest texel in the level 0 texture
map.
• BASE_LEVEL_LINEAR: Performs a bilinear interpolation on the four
nearest texels in the level 0 texture map.
These methods set and retrieve the image for a specified mipmap level. Level 0
is the base level.
These methods set and retrieve the array of images for all mipmap levels.
This parameter specifies the texture boundary color for this Texture object. The
texture boundary color is used when boundaryModeS or boundaryModeT is set to
CLAMP. The magnification filter affects the boundary color as follows: For BASE_
LEVEL_POINT, the boundary color is ignored since the filter size is 1 and the
border is unused. For BASE_LEVEL_LINEAR, the boundary color is used.
These methods set and retrieve the state of texture mapping for this Texture
object. A value of true means that texture mapping is enabled, false means that
texture mapping is disabled.
public void setMipMapMode(int mipMapMode)
public int getMipMapMode()
These methods set and retrieve the mipmap mode for texture mapping for this
Texture object. The mipmap mode is either BASE_LEVEL or MULTI_LEVEL_MIP_
MAP.
This method retrieves the number of mipmap levels needed for this Texture
object.
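Under the usual convention, a mipmapped texture halves each dimension per level down to 1x1; the method name below is invented, and the count for unusual sizes is ultimately implementation-defined:

```java
// Sketch of the mipmap level count: each level halves width and
// height (rounding down, minimum 1) until a 1x1 level is reached.
public class MipMapLevelSketch {
    public static int numLevels(int width, int height) {
        int levels = 1, w = width, h = height;
        while (w > 1 || h > 1) {
            w = Math.max(1, w / 2);
            h = Math.max(1, h / 2);
            levels++;
        }
        return levels;
    }

    public static void main(String[] args) {
        System.out.println(numLevels(256, 256)); // 9
        System.out.println(numLevels(64, 16));   // 7
    }
}
```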
Constructors
The Texture2D object has the following constructors.
public Texture2D()
This constructor is not very useful as the default width and height are 0.
public Texture2D(int mipmapMode, int format, int width, int height)
Constructs and initializes a Texture2D object with the specified attributes. The
mipmapMode parameter is either BASE_LEVEL or MULTI_LEVEL_MIPMAP. The for-
mat parameter is one of the following: INTENSITY, LUMINANCE, ALPHA, LUMI-
NANCE_ALPHA, RGB, or RGBA.
Constructors
The Texture3D object has the following constructors.
public Texture3D()
This constructor is not very useful as the default width, height, and depth are 0.
public Texture3D(int mipmapMode, int format, int width, int height,
int depth)
Constructs and initializes a Texture3D object using the specified attributes. The
mipmapMode parameter is either BASE_LEVEL or MULTI_LEVEL_MIPMAP. The for-
mat parameter is one of INTENSITY, LUMINANCE, ALPHA, LUMINANCE_ALPHA, RGB,
or RGBA. The default value for a Texture3D object is as follows:
Methods
The Texture3D object has the following methods.
This parameter specifies the boundary mode for the R coordinate in this Texture
object. The boundary mode is as follows:
• CLAMP: Clamps texture coordinates to be in the range [0, 1]. A constant
boundary color is used for R values that fall outside this range.
• WRAP: Repeats the texture by wrapping texture coordinates that are out-
side the range [0, 1]. Only the fractional portion of the texture coordinates
is used; the integer portion is discarded.
Constants
The TexCoordGeneration object specifies the following variables.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read, and in some cases write, its individual compo-
nent field information.
Constructors
The TexCoordGeneration object has the following constructors.
public TexCoordGeneration()
These constructors construct and initialize a TexCoordGeneration object using
the specified fields. Default values are used for those state variables not specified
in the constructor. The parameters are as follows:
• genMode: Texture generation mode. One of OBJECT_LINEAR, EYE_LINEAR,
or SPHERE_MAP.
• format: Texture format (2D or 3D). Either TEXTURE_COORDINATE_2 or
TEXTURE_COORDINATE_3.
• planeS: Plane equation for the S coordinate.
• planeT: Plane equation for the T coordinate.
• planeR: Plane equation for the R coordinate.
Methods
The TexCoordGeneration object has the following methods.
This parameter enables or disables texture coordinate generation for this TexCoordGeneration component object. The value is true if texture coordinate generation is
enabled, false if texture coordinate generation is disabled.
This parameter specifies the format, or dimension, of the generated texture coor-
dinates. The format value is either TEXTURE_COORDINATE_2 or TEXTURE_COORD-
INATE_3.
This parameter specifies the texture coordinate generation mode. The value is
one of OBJECT_LINEAR, EYE_LINEAR, or SPHERE_MAP.
This parameter specifies the S coordinate plane equation. This plane equation is
used to generate the S coordinate in OBJECT_LINEAR and EYE_LINEAR texture
generation modes.
This parameter specifies the T coordinate plane equation. This plane equation is
used to generate the T coordinate in OBJECT_LINEAR and EYE_LINEAR texture
generation modes.
This parameter specifies the R coordinate plane equation. This plane equation is
used to generate the R coordinate in OBJECT_LINEAR and EYE_LINEAR texture
generation modes.
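The linear generation modes evaluate each plane equation as a dot product; this can be sketched in plain Java (the class and method names are invented for illustration):

```java
// Sketch of linear texture coordinate generation: each generated
// coordinate is the dot product of a plane equation (A, B, C, D) with
// the vertex position (x, y, z, 1), i.e. s = A*x + B*y + C*z + D.
// OBJECT_LINEAR uses the object-space position; EYE_LINEAR uses the
// eye-space position instead.
public class TexGenSketch {
    public static float generate(float[] plane, float x, float y, float z) {
        return plane[0] * x + plane[1] * y + plane[2] * z + plane[3];
    }

    public static void main(String[] args) {
        float[] planeS = {1f, 0f, 0f, 0f}; // s simply tracks the x coordinate
        System.out.println(generate(planeS, 2.5f, 7f, -1f)); // 2.5
    }
}
```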
The TextureUnitState object defines all texture mapping state for a single texture
unit. An Appearance object contains an array of texture unit state objects to
define the state for multiple texture mapping units. The texture unit state consists
of the following:
• Texture – defines the texture image and filtering parameters used when tex-
ture mapping is enabled. These attributes are defined in a Texture object.
• Texture attributes – defines the attributes that apply to texture mapping,
such as the texture mode, texture transform, blend color, and perspective
correction mode. These attributes are defined in a TextureAttributes object.
• Texture coordinate generation – defines the attributes that apply to texture
coordinate generation, such as whether texture coordinate generation is en-
abled, coordinate format (2D or 3D coordinates), coordinate generation
mode (object linear, eye linear, or spherical reflection mapping), and the R,
S, and T coordinate plane equations. These attributes are defined in a Tex-
CoordGeneration object.
Constants
The TextureUnitState object has the following flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read or write this object’s texture, texture attribute, or
texture coordinate generation component information.
Constructors
The TextureUnitState object has the following constructors.
Methods
The TextureUnitState object has the following methods.
This method sets the texture, texture attributes, and texture coordinate generation
components in this TextureUnitState object to the specified component objects.
These methods set and retrieve the texture object. Setting it to null disables tex-
ture mapping for the texture unit corresponding to this TextureUnitState object.
These methods set and retrieve the textureAttributes object. Setting it to null will
result in default attribute usage for the texture unit corresponding to this Texture-
UnitState object.
These methods set and retrieve the texCoordGeneration object. Setting it to null
disables texture coordinate generation for the texture unit corresponding to this
TextureUnitState object.
Constants
The MediaContainer object has the following flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read or write its cached flag and its URL string.
Constructors
The MediaContainer object has the following constructors.
public MediaContainer()
Methods
The MediaContainer object has the following methods.
These methods are deprecated in Java 3D version 1.2. Use the setURLString,
setURLObject, and getURLString methods instead.
These methods set and retrieve the URL string containing the sound data.
public void setURLObject(URL url) New in 1.2
public URL getURLObject() New in 1.2
These methods set and retrieve the URL containing the sound data.
These methods set and retrieve the input stream object containing the sound data.
mate the speed of sound through air at room temperature, is multiplied by this
scale factor whenever the speed of sound is applied during spatialization calcula-
tions. Valid values are ≥ 0.0. Values > 1.0 increase the speed of sound, while val-
ues < 1.0 decrease its speed. A value of zero makes the sound silent (but the
sound continues to play).
8.1.17.2 Reverberation
Within Java 3D’s simple model for auralization, there are three components to
sound reverberation for a particular listening space:
• Delay time: Approximates the time from the start of a sound until it
reaches the listener after reflecting once off the surfaces in the region.
• Reflection coefficient: Attenuates the reverberated sound uniformly (for
all frequencies) as it bounces off surfaces.
• Feedback loop: Controls the maximum number of times a sound is
reflected off the surfaces.
None of these parameters are affected by sound position. Figure 8-2 shows the
interaction of these parameters.
Figure 8-2: Interaction of reverberation parameters. Amplitude (0 to 1.0)
versus time, showing the direct signal, the (early) reflections, and the
reverberation (late reflections); the reflection coefficient attenuates each
successive reflection, the reverb delay separates the direct signal from the
first reflection, and the decay time spans the reflections until they reach
effective zero.
The reflection coefficient for reverberation is a single scale factor used to
approximate the overall reflective or absorptive characteristics of the
surfaces in a reverberation region in which the listener is located. This
scale factor is applied to the sound’s amplitude regardless of the sound’s
position. A value of 1.0 represents a fully reflective surface (reflected
amplitudes are undiminished), while a value of 0.0 represents a fully
absorptive surface (reverberation is disabled).
The frequency scale factor can be used to increase or reduce the change of fre-
quency associated with the normal Doppler calculation, or to shift the pitch of
the sound directly if the Doppler effect is disabled. Values must be > 0.0 for sounds
to be heard. If the value is 0.0, sounds affected by this AuralAttributes object
are paused.
To simulate Doppler effect, the relative velocity (change in distance in the local
coordinate system between the sound source and the listener over time, in meters
per second) is calculated. This calculated velocity is multiplied by the given
velocity scale factor. Values must be ≥ 0.0. If the scale factor value is 0.0, Dop-
pler effect is not calculated or applied to the sound.
Constants
The AuralAttributes object has the following flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read or write the associated parameters.
Constructors
The AuralAttributes object has the following constructors.
public AuralAttributes()
Construct and initialize a new AuralAttributes object using the specified parame-
ters.
Methods
The AuralAttributes object has the following methods.
This parameter specifies an amplitude scale factor applied to the sound. Valid
values are ≥ 0.0.
The rolloff scale factor is used to model atmospheric changes from the normal
speed of sound. The base value of 0.344 meters per millisecond, which
approximates the speed of sound through air at room temperature, is multiplied
by this scale factor whenever the speed of sound is applied during
spatialization calculations. Valid values are ≥ 0.0. Values > 1.0 increase the
speed of sound, while values < 1.0 decrease it; a value of 0.0 makes the sound
silent (but the sound continues to play).
This parameter specifies an average amplitude scale factor for all sound waves
(independent of their frequencies) as they reflect off all surfaces within the acti-
vation region in which the listener is located. There is currently no method to
assign different reflective audio properties to individual surfaces. The range of
values is 0.0 to 1.0. A value of 0.0 represents a fully absorptive surface (no sound
waves reflect off), while a value of 1.0 represents a fully reflective surface
(amplitudes of sound waves reflecting off surfaces are not decreased).
This parameter specifies the delay time between each order of reflection while
reverberation is being rendered. In the first form of setReverbDelay, an explicit
delay time is given in milliseconds. In the second form, a reverberation bounds
volume is specified, and then the delay time is calculated, becoming the new
reverb delay time. A value of 0.0 for delay time disables reverberation.
These methods set and retrieve the reverberation bounds volume. In this form the
reverberation bounds volume parameter is used to calculate the reverb delay time
and the reverb decay. Specification of a non-null bounding volume causes the
explicit values given for reverb delay and decay to be overridden by the implicit
values calculated from these bounds.
This parameter specifies the maximum number of times reflections will be added
to the reverberation being calculated. When the amplitude of the n-th reflection
reaches effective zero, no further reverberations need be added to the sound
image. A value of 0 disables reverberation. A value of –1 specifies that the rever-
beration calculations will loop indefinitely, until the n-th reflection term reaches
effective zero.
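The interplay of the reflection coefficient and this reflection count can be sketched numerically. In the sketch below, effectiveOrder is a hypothetical helper (not part of the Java 3D API), and the effective-zero threshold is an assumed value for illustration: each bounce multiplies the amplitude by the reflection coefficient, and the –1 order corresponds to looping until the amplitude falls below the threshold.

```java
// Sketch (not Java 3D API): how many reflection terms contribute before the
// reverberated amplitude reaches "effective zero". The threshold value is an
// illustrative assumption, not taken from the specification.
public class ReverbOrder {
    public static int effectiveOrder(double reflectionCoeff, double effectiveZero) {
        if (reflectionCoeff <= 0.0) return 0;   // fully absorptive: no reverberation
        if (reflectionCoeff >= 1.0) return -1;  // never decays (would loop indefinitely)
        int n = 0;
        double amplitude = 1.0;
        while (amplitude >= effectiveZero) {
            amplitude *= reflectionCoeff;       // each bounce attenuates uniformly
            n++;
        }
        return n;
    }
}
```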
This parameter specifies an array of (distance, filter) attenuation pairs. If this is not
set, no distance filtering is performed (equivalent to using a distance filter of
Sound.NO_FILTER for all distances). Currently, this filter is a low-pass cutoff fre-
quency. This array of pairs defines a piecewise linear slope for a range of values.
This attenuation array is similar to the PointSound node’s distanceAttenuation
pair array, except that frequency values are paired with distances in this list.
Using these pairs, distance-based low-pass frequency filtering can be applied
during sound rendering. Distances, specified in the local coordinate system in
meters, must be > 0. Frequencies (in Hz) must be > 0.
If the distance from the listener to the sound source is less than the first distance
in the array, the first filter is applied to the sound source. This creates a spherical
region around the listener within which a sound is uniformly attenuated by the
first filter in the array. If the distance from the listener to the sound source is
greater than the last distance in the array, the last filter is applied to the sound
source.
The first form of setDistanceFilter takes these pairs of values as an array of
Point2f. The second form accepts two separate arrays for these values. The dis-
tance and frequencyCutoff arrays should be of the same length. If the fre-
quencyCutoff array length is greater than the distance array length, the
frequencyCutoff array elements beyond the length of the distance array are
ignored. If the frequencyCutoff array is shorter than the distance array, the
last frequencyCutoff array value is repeated to fill an array of length equal to
the distance array.
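The truncate-or-repeat rule above can be sketched in plain Java. Here conform is a hypothetical helper, not a Java 3D method; it only illustrates the array-length behavior described for the two-array form of setDistanceFilter.

```java
// Sketch of the array-length rule for the two-array form of setDistanceFilter
// (hypothetical helper, not part of the Java 3D API).
public class DistanceFilterRule {
    // Returns a cutoff array the same length as 'distance': extra cutoff
    // elements are ignored, and a shorter cutoff array is padded by
    // repeating its last value.
    public static float[] conform(float[] distance, float[] frequencyCutoff) {
        float[] result = new float[distance.length];
        for (int i = 0; i < distance.length; i++) {
            int j = Math.min(i, frequencyCutoff.length - 1);
            result[i] = frequencyCutoff[j];
        }
        return result;
    }

    public static void main(String[] args) {
        float[] dist = {2.0f, 10.0f, 50.0f};
        float[] shorter = {8000.0f, 4000.0f};            // last value repeated
        float[] longer = {8000.0f, 4000.0f, 2000.0f, 1000.0f}; // extras ignored
        System.out.println(java.util.Arrays.toString(conform(dist, shorter)));
        System.out.println(java.util.Arrays.toString(conform(dist, longer)));
    }
}
```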
The getDistanceFilterLength method returns the length of the distance filter
arrays. Arrays passed into getDistanceFilter methods should all be at least
this size.
There are two methods for getDistanceFilter, one returning an array of points,
the other returning separate arrays for each attenuation component.
Distance elements in this array of pairs are a monotonically increasing set of
floating-point numbers measured from the location of the sound source. Fre-
quency cutoff elements in this list of pairs can be any positive float. While for
most applications this list of values will usually be monotonically decreasing,
they do not have to be.
This parameter specifies a scale factor applied to the frequency of sound during
rendering playback. If the Doppler effect is disabled, this scale factor can be used
to increase or decrease the original pitch of the sound. During rendering, this
scale factor expands or contracts the usual frequency shift applied to the sound
source due to Doppler-effect calculations. Valid values are ≥ 0.0; a value of 0.0
pauses the sound.
This parameter specifies a scale factor applied to the relative velocity of the
sound relative to the listener’s position and movement in relation to the sound’s
position and movement over time. This scale factor is multiplied by the calcu-
lated velocity portion of the Doppler-effect equation used during sound render-
ing. This allows the application to exaggerate or reduce the relative velocity
calculated by the standard Doppler equation. Valid values are ≥ 0.0. A value of
0.0 disables any Doppler calculation.
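As an illustration of how these two scale factors might enter a Doppler computation, the sketch below uses the textbook moving-source Doppler form as an assumption; the exact equation a Java 3D implementation uses is not given here, and shiftedFrequency is a hypothetical helper.

```java
// Illustrative sketch only: how the frequency and velocity scale factors could
// combine in a standard Doppler calculation. The textbook moving-source form
// below is an assumption, not the specified Java 3D equation.
public class DopplerSketch {
    static final double SPEED_OF_SOUND = 0.344; // meters per millisecond (base value)

    // relativeVelocity: change in source-to-listener distance over time,
    // in the same units as SPEED_OF_SOUND (positive = approaching).
    public static double shiftedFrequency(double baseFrequency,
                                          double frequencyScale,
                                          double velocityScale,
                                          double relativeVelocity) {
        if (frequencyScale == 0.0) return 0.0;        // sound is paused
        double v = velocityScale * relativeVelocity;  // scaled relative velocity
        double doppler = SPEED_OF_SOUND / (SPEED_OF_SOUND - v);
        return frequencyScale * baseFrequency * doppler;
    }
}
```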
An ImageComponent using a format of FORMAT_RGB5 can represent red, green, and blue
values between 0 and 31, while an ImageComponent using a format of FORMAT_
RGB8 can represent color values between 0 and 255. Even when byte values are
used to create a RenderedImage with 8-bit color components, the resulting colors
(bytes) are interpreted as if they were unsigned. Values greater than 127 can be
assigned to a byte variable using a type cast. For example:
byteVariable = (byte) intValue; // intValue can be > 127
Constants
The ImageComponent object has the following flags:
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read the associated parameters.
The ImageComponent object specifies the following variables, used to define 2D
or 3D ImageComponent classes. These variables specify the format of the pixel
data.
Specifies that each pixel contains three eight-bit channels, one each for red,
green, and blue. This is the same as FORMAT_RGB8.
Specifies that each pixel contains four eight-bit channels, one each for red, green,
blue, and alpha. This is the same as FORMAT_RGBA8.
Specifies that each pixel contains three eight-bit channels, one each for red,
green, and blue. This is the same as FORMAT_RGB.
Specifies that each pixel contains four eight-bit channels, one each for red, green,
blue, and alpha. This is the same as FORMAT_RGBA.
Specifies that each pixel contains three five-bit channels, one each for red, green,
and blue.
Specifies that each pixel contains three five-bit channels, one each for red, green,
and blue, and a one-bit channel for alpha.
Specifies that each pixel contains three four-bit channels, one each for red, green,
and blue.
Specifies that each pixel contains four four-bit channels, one each for red, green,
blue, and alpha.
Specifies that each pixel contains two four-bit channels, one each for luminance
and alpha.
Specifies that each pixel contains two eight-bit channels, one each for luminance
and alpha.
Specifies that each pixel contains two three-bit channels, one each for red and
green, and a two-bit channel for blue.
Specifies that each pixel contains one eight-bit channel. The channel can be used
for only luminance, alpha, or intensity.
Constructors
The ImageComponent object defines the following constructor.
Constructs an image component object using the specified format, width, height,
byReference flag, and yUp flag.
Methods
The ImageComponent object defines the following methods.
These methods retrieve the width, height, and format of this image component
object.
This method retrieves the data access mode for this ImageComponent object.
// A BufferedImage implements RenderedImage, so a BufferedImage reference
// may be assigned directly to a RenderedImage variable:
BufferedImage bi = new BufferedImage(64, 64, BufferedImage.TYPE_INT_RGB);
RenderedImage ri = bi;
ImageComponent2D ic;
Constructors
The ImageComponent2D object defines the following constructors.
The first constructor constructs a 2D image component object using the specified
format, width, height, byReference flag, and yUp flag, and a null image. The sec-
ond and third constructors construct a 2D image component object using the
specified format, image, byReference flag, and yUp flag.
Methods
The ImageComponent2D object defines the following methods.
These methods set the image in this ImageComponent2D object to the specified
BufferedImage or RenderedImage object. If the data access mode is not by-refer-
ence, the image data is copied into this object. If the data access mode is by-ref-
erence, a reference to the image is saved, but the data is not necessarily copied.
These methods retrieve the image from this ImageComponent2D object. If the
data access mode is not by-reference, a copy of the image is made. If the data
access mode is by-reference, the reference is returned.
// A BufferedImage implements RenderedImage, so a BufferedImage reference
// may be assigned directly to a RenderedImage variable:
BufferedImage bi = new BufferedImage(64, 64, BufferedImage.TYPE_INT_RGB);
RenderedImage ri = bi;
ImageComponent3D ic;
Constructors
The ImageComponent3D object defines the following constructors.
Constructs and initializes a 3D image component object using the specified for-
mat, width, height, and depth. Default values are used for all other parameters.
The default values are as follows:
This constructor constructs a 3D image component object using the specified for-
mat, width, height, depth, byReference flag, and yUp flag. Default values are
used for all other parameters.
These two constructors construct a 3D image component object using the speci-
fied format, BufferedImage or RenderedImage array, byReference flag, and yUp
flag. Default values are used for all other parameters.
Methods
The ImageComponent3D object defines the following methods.
These methods set the array of images in this image component to the specified
array of RenderedImage or BufferedImage objects. If the data access mode is not
by-reference, the data is copied into this object. If the data access mode is by-ref-
erence, a shallow copy of the array of references to the objects is made, but the
data is not necessarily copied.
These methods set this image component at the specified index to the specified
RenderedImage or BufferedImage object. If the data access mode is not by-refer-
ence, the data is copied into this object. If the data access mode is by-reference,
a reference to the image is saved, but the data is not necessarily copied.
Constants
The DepthComponent object has the following flags:
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read the associated parameters.
Methods
Constructors
The DepthComponentFloat object defines the following constructors.
Constructs a new floating-point depth (Z-buffer) component object with the spec-
ified width and height.
Methods
These methods set and retrieve the specified depth data for this object.
Constructors
The DepthComponentInt object defines the following constructor.
Constructs a new integer depth (Z-buffer) component object with the specified
width and height.
Methods
These methods set and retrieve the specified depth data for this object.
Constructors
The DepthComponentNative object defines the following constructor.
Constructs a new native depth (Z-buffer) component object with the specified
width and height.
Constructors
The Bounds object defines the following constructor.
public Bounds()
Methods
The Bounds object defines the following methods.
This method sets the value of this Bounds object to enclose the specified bound-
ing object.
These methods test for the intersection of this Bounds object with a ray, a point,
another Bounds object, or an array of Bounds objects, respectively.
This method finds the closest bounding object that intersects this bounding
object.
These methods combine this Bounds object with a bounding object, an array of
bounding objects, a point, or an array of points, respectively.
The first method transforms a Bounds object so that it bounds a volume that is the
result of transforming the given bounding object by the given transform. The sec-
ond method transforms the Bounds object by the given transform.
This method indicates whether the specified bounds object is equal to this
Bounds object. They are equal if both the specified bounds object and this
Bounds are instances of the same Bounds subclass and all of the data members
of bounds are equal to the corresponding data members in this Bounds.
This method returns a hash code for this Bounds object based on the data values
in this object. Two different Bounds objects of the same type with identical data
values (i.e., Bounds.equals returns true) will return the same hash code. Two
Bounds objects with different data members may return the same hash code
value, although this is not likely.
This method tests whether the bounds is empty. A bounds is empty if it is null
(either by construction or as the result of a null intersection) or if its volume is
negative. A bounds with a volume of zero is not empty.
Constructors
The BoundingBox object defines the following constructors.
public BoundingBox()
public BoundingBox(Point3d lower, Point3d upper)
public BoundingBox(Bounds boundsObject)
public BoundingBox(Bounds bounds[])
The first constructor constructs and initializes a 2X unity BoundingBox about the
origin. The second constructor constructs and initializes a BoundingBox from the
given minimum and maximum in x, y, and z. The third constructor constructs and
initializes a BoundingBox from a bounding object. The fourth constructor con-
structs and initializes a BoundingBox from an array of bounding objects.
Methods
The BoundingBox object defines the following methods.
Sets the value of this bounding region to enclose the specified bounding object.
These methods combine this bounding box with a bounding object, an array of
bounding objects, a point, or an array of points, respectively.
The first method transforms a bounding box so that it bounds a volume that is the
result of transforming the given bounding object by the given transform. The sec-
ond method transforms the bounding box by the given transform.
These methods test for the intersection of this bounding box with a ray, a point,
another Bounds object, and an array of Bounds objects, respectively.
These methods compute a new BoundingBox that bounds the volume created by
the intersection of this BoundingBox with another Bounds object or array of
Bounds objects.
This method finds the closest bounding object that intersects this bounding box.
This method indicates whether the specified bounds object is equal to this
BoundingBox object. They are equal if the specified bounds object is an instance
of BoundingBox and all of the data members of bounds are equal to the corre-
sponding data members in this BoundingBox.
This method returns a hash code value for this BoundingBox object based on the
data values in this object. Two different BoundingBox objects with identical data
values (i.e., BoundingBox.equals returns true) will return the same hash code
value. Two BoundingBox objects with different data members may return the
same hash code value, although this is not likely.
This method tests whether the bounding box is empty. A bounding box is empty
if it is null (either by construction or as the result of a null intersection) or if its
volume is negative. A bounding box with a volume of zero is not empty.
Constructors
The BoundingSphere object defines the following constructors.
public BoundingSphere()
public BoundingSphere(Point3d center, double radius)
public BoundingSphere(Bounds boundsObject)
public BoundingSphere(Bounds boundsObjects[])
Methods
The BoundingSphere object defines the following methods.
Sets the value of this bounding sphere to enclose the volume specified by the
Bounds object.
These methods combine this bounding sphere with a bounding object, an array
of bounding objects, a point, or an array of points, respectively.
These methods test for the intersection of this bounding sphere with the given
ray, point, another Bounds object, or an array of Bounds objects.
These methods compute a new BoundingSphere that bounds the volume created
by the intersection of this BoundingSphere with another Bounds object or array
of Bounds objects.
This method finds the closest bounding object that intersects this bounding
sphere.
The first method transforms a bounding sphere so that it bounds a volume that is
the result of transforming the given bounding object by the given transform. The
second method transforms the bounding sphere by the given transform. Note that
when transforming a bounding sphere by a transformation matrix containing a
nonuniform scale or a shear, the result is a bounding sphere with a radius equal
to the maximal scale in any direction—the bounding sphere does not transform
into an ellipsoid.
This method indicates whether the specified bounds object is equal to this
BoundingSphere object. They are equal if the specified bounds object is an
instance of BoundingSphere and all of the data members of bounds are equal to
the corresponding data members in this BoundingSphere.
This method returns a hash code value for this BoundingSphere object based on
the data values in this object. Two different BoundingSphere objects with identi-
cal data values (i.e., BoundingSphere.equals returns true) will return the same
hash code value. Two BoundingSphere objects with different data members may
return the same hash code value, although this is not likely.
This method tests whether the bounding sphere is empty. A bounding sphere is
empty if it is null (either by construction or as the result of a null intersection)
or if its volume is negative. A bounding sphere with a volume of zero is not
empty.
Constructors
The BoundingPolytope object defines the following constructors.
public BoundingPolytope()
This constructor constructs and initializes a BoundingPolytope to a set of six
planes that define a cube extending from –1 to 1 in each dimension:
planes[0] x ≤ 1 (1,0,0,–1)
planes[1] –x ≤ 1 (–1,0,0,–1)
planes[2] y ≤ 1 (0,1,0,–1)
planes[3] –y ≤ 1 (0,–1,0,–1)
planes[4] z ≤ 1 (0,0,1,–1)
planes[5] –z ≤ 1 (0,0,–1,–1)
Methods
The BoundingPolytope object defines the following methods.
These methods set and retrieve the bounding planes for this BoundingPolytope
object.
This method returns the number of bounding planes for this bounding polytope.
This method sets the planes for this BoundingPolytope by keeping its current
number and direction of the planes and computing new plane positions to
enclose the given Bounds object.
The first method transforms a bounding polytope so that it bounds a volume that
is the result of transforming the given bounding object by the given transform.
The second method transforms the bounding polytope by the given transform.
These methods test for the intersection of this BoundingPolytope with the given
ray, point, another Bounds object, or array of Bounds objects, respectively.
These methods compute a new BoundingPolytope that bounds the volume cre-
ated by the intersection of this BoundingPolytope with another Bounds object or
array of Bounds objects.
This method finds the closest bounding object that intersects this bounding poly-
tope.
This method indicates whether the specified bounds object is equal to this
BoundingPolytope object. They are equal if the specified bounds object is an
instance of BoundingPolytope and all of the data members of bounds are equal
to the corresponding data members in this BoundingPolytope.
This method returns a hash code value for this BoundingPolytope object based
on the data values in this object. Two different BoundingPolytope objects with
identical data values (i.e., BoundingPolytope.equals returns true) will return
the same hash code value. Two BoundingPolytope objects with different data
members may return the same hash code value, although this is not likely.
This method tests whether the bounding polytope is empty. A bounding polytope
is empty if it is null (either by construction or as the result of a null intersection)
or if its volume is negative. A bounding polytope with a volume of zero is not
empty.
Constants
⎡ m00 m01 m02 m03 ⎤   ⎡ x ⎤   ⎡ x′ ⎤
⎢ m10 m11 m12 m13 ⎥ ⋅ ⎢ y ⎥ = ⎢ y′ ⎥
⎢ m20 m21 m22 m23 ⎥   ⎢ z ⎥   ⎢ z′ ⎥
⎣ m30 m31 m32 m33 ⎦   ⎣ w ⎦   ⎣ w′ ⎦

x′ = m00⋅x + m01⋅y + m02⋅z + m03⋅w
y′ = m10⋅x + m11⋅y + m12⋅z + m13⋅w
z′ = m20⋅x + m21⋅y + m22⋅z + m23⋅w
w′ = m30⋅x + m31⋅y + m32⋅z + m33⋅w
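Written out in code, the generalized transform above is a plain 4 × 4 matrix–vector product. The sketch below uses a row-major double[16] layout as an illustration; it is a standalone helper, not the Transform3D implementation.

```java
// Multiply a 4x4 matrix (row-major double[16]) by a homogeneous point
// (x, y, z, w), producing (x', y', z', w') exactly as in the equations above.
public class MatVec {
    public static double[] transform(double[] m, double[] p) {
        double[] out = new double[4];
        for (int row = 0; row < 4; row++) {
            out[row] = m[4 * row]     * p[0]
                     + m[4 * row + 1] * p[1]
                     + m[4 * row + 2] * p[2]
                     + m[4 * row + 3] * p[3];
        }
        return out;
    }
}
```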
Constructors
The Transform3D object defines the following constructors.
public Transform3D()
This constructs and initializes a new Transform3D object to the identity transfor-
mation.
This constructs and initializes a new Transform3D object from the specified
transform.
These construct and initialize a new Transform3D object from the rotation
matrix, translation, and scale values. The scale is applied only to the rotational
component of the matrix (upper 3 × 3) and not to the translational components of
the matrix.
These construct and initialize a new Transform3D object from the 4 × 4 matrix.
The type of the constructed transform is classified automatically.
These construct and initialize a new Transform3D object from the array of length
16. The top row of the matrix is initialized to the first four elements of the array,
and so on. The type of the constructed transform is classified automatically.
These construct and initialize a new Transform3D object from the quaternion q1,
the translation t1, and the scale s. The scale is applied only to the rotational
components of the matrix (the upper 3 × 3) and not to the translational compo-
nents of the matrix.
This constructs and initializes a new Transform3D object and initializes it to the
upper 4 × 4 of the specified GMatrix. If the specified matrix is smaller than
4 × 4, the remaining elements in the transformation matrix are assigned to zero.
Methods
The Transform3D object defines the following methods.
This method retrieves the type of this matrix. The type is an ORed bitmask of all
of the type classifications to which it belongs.
This method retrieves the least general type of this matrix. The order of general-
ity from least to most is as follows: ZERO, IDENTITY, SCALE, TRANSLATION,
ORTHOGONAL, RIGID, CONGRUENT, and AFFINE. If the matrix is ORTHOGONAL, call-
ing the method getDeterminantSign will yield more information.
This method returns the sign of the determinant of this matrix. A return value of
true indicates a positive determinant. A return value of false indicates a nega-
tive determinant. In general, an orthogonal matrix with a positive determinant is
a pure rotation matrix; an orthogonal matrix with a negative determinant is both
a rotation and a reflection matrix.
This method sets the rotational component (upper 3 × 3) of this transform to the
rotation matrix converted from the Euler angles provided. The euler parameter
is a Vector3d consisting of three rotation angles applied first about the X, then
the Y, then the Z axis. These rotations are applied using a static frame of refer-
ence. In other words, the orientation of the Y rotation axis is not affected by the
X rotation and the orientation of the Z rotation axis is not affected by the X or Y
rotation.
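The static-frame ordering can be made concrete: for column vectors, rotating first about the fixed X axis, then Y, then Z composes as R = Rz ⋅ Ry ⋅ Rx. The sketch below (plain 3 × 3 row-major arrays, a hypothetical helper rather than Transform3D code) illustrates this composition.

```java
// Sketch of static-frame Euler composition: the three rotations are applied
// about the fixed X, then Y, then Z axes, i.e. R = Rz * Ry * Rx for column
// vectors. Plain 3x3 row-major arrays; not the Transform3D implementation.
public class EulerStatic {
    public static double[] mul(double[] a, double[] b) { // 3x3 * 3x3
        double[] c = new double[9];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    c[3 * i + j] += a[3 * i + k] * b[3 * k + j];
        return c;
    }

    public static double[] fromEuler(double ax, double ay, double az) {
        double cx = Math.cos(ax), sx = Math.sin(ax);
        double cy = Math.cos(ay), sy = Math.sin(ay);
        double cz = Math.cos(az), sz = Math.sin(az);
        double[] rx = {1, 0, 0,    0, cx, -sx,   0, sx, cx};
        double[] ry = {cy, 0, sy,  0, 1, 0,     -sy, 0, cy};
        double[] rz = {cz, -sz, 0, sz, cz, 0,    0, 0, 1};
        return mul(rz, mul(ry, rx)); // X applied first, Z last (static frame)
    }
}
```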
These methods set the rotational component (upper 3 × 3) of this transform to the
values in the specified matrix; the other elements of this transform are
unchanged. A singular value decomposition is performed on this object’s upper
3 × 3 matrix to factor out the scale, then this object’s upper 3 × 3 matrix compo-
nents are replaced by the input rotational components, and finally the scale is
reapplied to the rotational components.
These methods set the rotational component (upper 3 × 3) of this transform to the
appropriate values derived from the specified quaternion; the other elements of
this transform are unchanged. A singular value decomposition is performed on
this object’s upper 3 × 3 matrix to factor out the scale, then this object’s upper
3 × 3 matrix components are replaced by the matrix equivalent of the quaternion,
and finally the scale is reapplied to the rotational components.
These methods set the rotational component (upper 3 × 3) of this transform to the
appropriate values derived from the specified axis-angle; the other elements of
this transform are unchanged. A singular value decomposition is performed on
this object’s upper 3 × 3 matrix to factor out the scale, then this object's upper
3 × 3 matrix components are replaced by the matrix equivalent of the axis-angle,
and finally the scale is reapplied to the rotational components.
The set method sets the scale component of this transform by factoring out the
current scale from the rotational component and multiplying by the new scale.
The get method performs an SVD normalization of this transform to calculate
and return the scale factor; this transform is not modified. If the matrix has non-
uniform scale factors, the largest of the x, y, and z scale factors will be returned.
The set method sets the possibly non-uniform scale component of the current
transform. Any existing scale is first factored out of the existing transform before
the new scale is applied. The get method returns the possibly non-uniform scale
components of the current transform and places them into the scale vector.
The first method scales transform t1 by a uniform scale matrix with scale factor
s, then adds transform t2 (this = S * t1 + t2). The second method scales this
transform by a uniform scale matrix with scale factor s, then adds transform t1
(this = S * this + t1).
The set methods replace the upper 3 × 3 matrix values of this transform with the
values in the matrix m1. The get methods retrieve the upper 3 × 3 matrix values
of this transform and place them in the matrix m1.
The first add method adds this transform to the transform t1 and places the result
back into this. The second add method adds the transforms t1 and t2 and
places the result into this. The first sub method subtracts transform t1 from this
transform and places the result back into this. The second sub method subtracts
transform t2 from t1 and places the result into this.
The first method adds a scalar to each component of this transform. The second
method adds a scalar to each component of the transform t1 and places the result
into this. Transform t1 is not modified.
The first method transposes this matrix in place. The second method transposes
transform t1 and places the value into this transform. The transform t1 is not
modified.
These three methods set the value of this matrix to a rotation matrix about the
specified axis. The matrices rotate in a counter-clockwise (right-handed) direc-
tion. The angle to rotate is specified in radians.
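As an illustration of the counter-clockwise (right-handed) convention, here is the Z-axis rotation written out with a plain array; this is a hypothetical sketch, not the rotZ implementation itself. Applying it to the point (1, 0, 0) with an angle of π/2 yields approximately (0, 1, 0).

```java
// Row-major 4x4 rotation matrix about the Z axis; angle in radians,
// counter-clockwise when viewed from +Z looking toward the origin.
public class RotZ {
    public static double[] rotZ(double angle) {
        double c = Math.cos(angle), s = Math.sin(angle);
        return new double[] {
            c, -s, 0, 0,
            s,  c, 0, 0,
            0,  0, 1, 0,
            0,  0, 0, 1
        };
    }
}
```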
This method modifies the translational components of this transform to the val-
ues of the argument. The other values of this transform are not modified.
These methods set the value of this transform to the matrix conversion of the
quaternion argument.
public final void set(Quat4d q1, Vector3d t1, double s)
public final void set(Quat4f q1, Vector3d t1, double s)
public final void set(Quat4f q1, Vector3f t1, float s)
These methods set the value of this matrix from the rotation expressed by the
quaternion q1, the translation t1, and the scale s.
These methods set the translational value of this matrix to the specified vector
parameter values and set the other components of the matrix as if this transform
were an identity matrix.
These methods set the value of this transform to a scale and translation matrix;
the translation is scaled by the scale factor and all of the matrix values are mod-
ified.
public final void set(Transform3D t1)
This method sets the matrix, type, and state of this transform to the matrix, type,
and state of the transform t1.
public final void set(double matrix[])
public final void set(float matrix[])
These methods set the matrix values of this transform to the specified matrix val-
ues.
The first method sets the value of this transform to a uniform scale; all of the
matrix values are modified. The next two methods set the value of this transform
to a scale and translation matrix; the scale is not applied to the translation and all
of the matrix values are modified.
public final void set(Matrix4d m1)
public final void set(Matrix4f m1)
These methods set the matrix values of this transform to the matrix values in the
specified matrix.
These methods set the rotational and scale components (upper 3 × 3) of this
transform to the matrix values in the specified matrix. The remaining matrix val-
ues are set to the identity matrix. All values of the matrix are modified.
These methods set the value of this matrix from the rotation expressed by the
rotation matrix m1, the translation t1, and the scale s. The scale is only applied to
the rotational component of the matrix (upper 3 × 3) and not to the translational
component of the matrix.
These methods set the matrix values of this transform to the matrix values in the
specified matrix. The GMatrix object must specify a 4 × 4, 3 × 4, or 3 × 3 matrix.
These methods set the rotational component (upper 3 × 3) of this transform to the
matrix conversion of the specified axis-angle argument. The remaining matrix
values are set to the identity matrix. All values of the matrix are modified.
These methods place the values of this transform into the specified matrix of
length 16. The first four elements of the array will contain the top row of the
transform matrix, and so on.
public final void get(Matrix4d matrix)
public final void get(Matrix4f matrix)
These methods place the values of this transform into the matrix argument.
public final void get(Matrix3d m1)
public final void get(Matrix3f m1)
These methods place the normalized rotational component of this transform into
the 3 × 3 matrix argument.
These methods place the normalized rotational component of this transform into
the m1 parameter and the translational component into the t1 parameter.
These methods perform an SVD normalization of this matrix to acquire the nor-
malized rotational component. The values are placed into the quaternion q1
parameter.
The first method inverts this transform in place. The second method sets the
value of this transform to the inverse of the transform t1. Both of these methods
use the transform type to determine the optimal algorithm for inverting the trans-
form.
The first method sets the value of this transform to the result of multiplying itself
with transform t1 (this = this * t1). The second method sets the value of this
transform to the result of multiplying transform t1 by transform t2
(this = t1 * t2).
The first method multiplies this transform by the scalar constant. The second
method multiplies transform t1 by the scalar constant and places the value into
this transform.
The first method multiplies this transform by the inverse of transform t1 and
places the result into this transform (this = this * t1⁻¹). The second method mul-
tiplies transform t1 by the inverse of transform t2 and places the result into this
transform (this = t1 * t2⁻¹).
Both of these methods use an SVD normalization. The first normalize method
normalizes the rotational components (upper 3 × 3) of matrix this and places
the results back into this. The second normalize method normalizes the rota-
tional components (upper 3 × 3) of transform t1 and places the result in this.
Both of these methods use a cross-product (CP) normalization. The first normal-
izeCP method normalizes the rotational components (upper 3 × 3) of this trans-
form and places the result into this transform. The second normalizeCP method
normalizes the rotational components (upper 3 × 3) of transform t1 and places the
result into this transform.
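A cross-product normalization of the kind normalizeCP performs can be sketched in plain Java (a stand-in for the Transform3D method, not its actual implementation): the upper 3 × 3 is re-orthogonalized via cross products and renormalized.

```java
// Sketch of a cross-product (CP) normalization of the upper 3x3 of a
// transform, in the spirit of normalizeCP. The 3x3 matrix is row-major;
// its columns are treated as the rotational basis vectors, which are
// re-orthogonalized via cross products and renormalized.
public class NormalizeCP {
    public static double[] normalizeCP(double[] m) {
        double[] x = { m[0], m[3], m[6] };   // first column
        double[] y = { m[1], m[4], m[7] };   // second column
        double[] zn = unit(cross(x, y));     // rebuild third column
        double[] xn = unit(x);
        double[] yn = cross(zn, xn);         // make y exactly orthogonal
        return new double[] {
            xn[0], yn[0], zn[0],
            xn[1], yn[1], zn[1],
            xn[2], yn[2], zn[2]
        };
    }

    static double[] cross(double[] a, double[] b) {
        return new double[] {
            a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]
        };
    }

    static double[] unit(double[] v) {
        double n = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        return new double[] { v[0]/n, v[1]/n, v[2]/n };
    }
}
```

For example, feeding a uniform scale of 2 through this routine recovers the identity rotation.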
The first method returns true if all of the data members of transform t1 are
equal to the corresponding data members in this transform. The second method
returns true if the Object o1 is of type Transform3D and all of the data members
of o1 are equal to the corresponding data members in this Transform3D.
This method returns true if the L∞ distance between this transform and trans-
form m1 is less than or equal to the epsilon parameter; otherwise, it returns
false. The L∞ distance is equal to:

MAX[i = 0,1,2,3; j = 0,1,2,3; abs(this.m(i,j) − m1.m(i,j))]
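As a plain-Java sketch (stand-in code, not the Transform3D class itself), the L∞ test reduces to taking the maximum element-wise absolute difference over a length-16 row-major array:

```java
// Sketch of the L-infinity (L∞) distance used by epsilonEquals: the
// maximum absolute element-wise difference between two 4x4 transforms,
// represented here as length-16 row-major arrays.
public class LinfDistance {
    public static double linf(double[] a, double[] b) {
        double max = 0.0;
        for (int i = 0; i < 16; i++) {
            max = Math.max(max, Math.abs(a[i] - b[i]));
        }
        return max;
    }

    public static boolean epsilonEquals(double[] a, double[] b, double eps) {
        return linf(a, b) <= eps;
    }
}
```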
This method returns a hash number based on the data values in this object. Two
different Transform3D objects with identical data values (that is, true is returned
for trans.equals(Transform3D)) will return the same hash number. Two
Transform3D objects with different data members may return the same hash
value, although this is not likely.
The first two methods transform the vector vec by this transform and place the
result into vecOut. The last two methods transform the vector vec by this trans-
form and place the result back into vec.
The first two methods transform the point parameter by this transform and place
the result into pointOut. The last two methods transform the point parameter
by this transform and place the result back into point. In both cases, the fourth
element of the point input parameter is assumed to be 1.
The first two methods transform the normal parameter by this transform and
place the value into normalOut. The third and fourth methods transform the nor-
mal parameter by this transform and place the value back into normal.
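The point/vector distinction above (an implicit fourth element of 1 for points, so translation applies; 0 for vectors, so it does not) can be sketched in plain Java with a row-major 4 × 4 matrix (stand-in code, not the Transform3D methods):

```java
// Sketch of transforming points versus vectors: a point carries an
// implicit fourth element of 1 (translation applies), while a vector
// carries 0 (translation is ignored). Row-major 4x4 matrix.
public class TransformPointVector {
    static double[] apply(double[] m, double[] v, double w) {
        double[] out = new double[3];
        for (int r = 0; r < 3; r++) {
            out[r] = m[4*r]*v[0] + m[4*r+1]*v[1] + m[4*r+2]*v[2] + m[4*r+3]*w;
        }
        return out;
    }

    public static double[] transformPoint(double[] m, double[] p)  { return apply(m, p, 1.0); }
    public static double[] transformVector(double[] m, double[] v) { return apply(m, v, 0.0); }
}
```

With a pure translation matrix, a point moves but a vector is left unchanged.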
This is a utility method that specifies the position and orientation of a viewing
transformation. It works very much like the similar function in OpenGL. The
inverse of this transform can be used to control the ViewPlatform object within
the scene graph. Alternatively, this transform can be passed directly to the View’s
VpcToEc transform via the compatibility mode viewing functions defined in
Section C.11.2, “Using the Camera-based View Model.”
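A lookAt-style viewing matrix can be sketched in plain Java (a stand-in inspired by this utility and by OpenGL's gluLookAt, not the Transform3D method itself): build an orthonormal basis from the eye point, the point to look at, and an up vector.

```java
// Sketch of a lookAt-style viewing matrix. The basis vectors are the
// side, up, and negated forward directions; the fourth column carries
// the translation that moves the eye to the origin. Row-major 4x4.
public class LookAtSketch {
    public static double[] lookAt(double[] eye, double[] center, double[] up) {
        double[] f = unit(sub(center, eye));   // forward
        double[] s = unit(cross(f, up));       // side
        double[] u = cross(s, f);              // recomputed up
        return new double[] {
             s[0],  s[1],  s[2], -dot(s, eye),
             u[0],  u[1],  u[2], -dot(u, eye),
            -f[0], -f[1], -f[2],  dot(f, eye),
              0,     0,     0,    1
        };
    }

    static double[] sub(double[] a, double[] b) { return new double[] { a[0]-b[0], a[1]-b[1], a[2]-b[2] }; }
    static double dot(double[] a, double[] b)   { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
    static double[] cross(double[] a, double[] b) {
        return new double[] { a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0] };
    }
    static double[] unit(double[] v) {
        double n = Math.sqrt(dot(v, v));
        return new double[] { v[0]/n, v[1]/n, v[2]/n };
    }
}
```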
This projection transform can be used to directly set the View’s left and right
projection transforms when in compatibility mode. See Section C.11.2, “Using
the Camera-based View Model,” for details.
The fovx parameter specifies the field of view in the x direction in radians.
SceneGraphObject
    NodeComponent
        Geometry
            CompressedGeometry
            Raster
            Text3D
            GeometryArray
                GeometryStripArray
                    LineStripArray
                    TriangleStripArray
                    TriangleFanArray
                LineArray
                PointArray
                QuadArray
                TriangleArray
                IndexedGeometryArray
                    IndexedGeometryStripArray
                        IndexedLineStripArray
                        IndexedTriangleStripArray
                        IndexedTriangleFanArray
                    IndexedLineArray
                    IndexedPointArray
                    IndexedQuadArray
                    IndexedTriangleArray
Constants
The Geometry object defines the following constant.
This flag specifies that this Geometry object allows the intersect operation.
Constructors
public Geometry()
Constants
The GeometryArray object defines the following flags.
These flags specify that the GeometryArray object allows reading or writing of
the array of coordinates.
These flags specify that the GeometryArray object allows reading or writing of
the array of colors.
These flags specify that the GeometryArray object allows reading or writing of
the array of normals.
These flags specify that the GeometryArray object allows reading or writing of
the array of texture coordinates.
These flags specify that the GeometryArray object allows reading or writing of
any count or initial index data (such as the vertex count) associated with the
GeometryArray.
This flag specifies that the GeometryArray object allows reading the vertex for-
mat associated with the GeometryArray.
These flags specify that this GeometryArray allows reading or writing the geom-
etry data reference information for this object. The second flag also enables writ-
ing the referenced data itself, via the GeometryUpdater interface. These are only
used in by-reference geometry mode.
This flag specifies that the position, color, normal, and texture coordinate data for
this GeometryArray are accessed by reference.
This flag specifies that the position, color, normal, and texture coordinate data for
this GeometryArray are accessed via a single interleaved, floating-point array
reference. All of the data values for each vertex are stored in consecutive mem-
ory locations. This is only valid in conjunction with the BY_REFERENCE flag.
Constructors
The GeometryArray object has the following constructors.
Constructs an empty GeometryArray object with the specified vertex format and
number of vertices. The vertexCount parameter specifies the number of vertex
elements in this array. The vertexFormat parameter is a mask indicating which
vertex components are present in each vertex. The vertex format is specified as a
set of flags that are bitwise ORed together to describe the per-vertex data. The
following vertex formats are supported.
Methods
GeometryArray methods provide access (get and set methods) to individual
vertex component arrays in two different modes: as individual elements or as
arrays of multiple elements.
This method updates geometry array data that is accessed by reference. This
method calls the updateData method of the specified GeometryUpdater object to
synchronize updates to vertex data that is referenced by this GeometryArray
object. Applications that wish to modify such data must perform all updates via
this method.
This method may also be used to atomically set multiple references (e.g., to
coordinate and color arrays) or atomically change multiple data values through
the geometry data copying methods.
Sets or retrieves the valid vertex count for this GeometryArray object. This count
specifies the number of vertices actually used in rendering or other operations
such as picking and collision. This attribute is initialized to vertexCount.
Sets or retrieves the initial vertex index for this GeometryArray object. This
index specifies the first vertex within this geometry array that is actually used in
rendering or other operations such as picking and collision. This attribute is ini-
tialized to 0. This attribute is only used when the data mode for this geometry
array object is not BY_REFERENCE.
Sets or retrieves the coordinate associated with the vertex at the specified index
of this object. The index parameter is the vertex index in this geometry array.
The coordinate parameter is an array of three values containing the new coordi-
nate.
Sets or retrieves the coordinate associated with the vertex at the specified index.
The index parameter is the vertex index in this geometry array. The coordinate
parameter is a vector containing the new coordinate.
Sets or retrieves the coordinates associated with the vertices starting at the spec-
ified index. The index parameter is the starting vertex index in this geometry
array. The coordinates parameter is an array of 3n values containing n new
coordinates. The length of the coordinates array determines the number of ver-
tices copied.
Sets or retrieves the coordinates associated with the vertices starting at the spec-
ified index. The index parameter is the starting vertex index in this geometry
array. The coordinates parameter is an array of points containing new coordi-
nates. The length of the coordinates array determines the number of vertices
copied.
These methods set the coordinates associated with the vertices starting at the
specified index for this object, using coordinate data starting from vertex index
start for length vertices. The index parameter is the starting destination vertex
index in this geometry array.
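The bulk-copy semantics just described amount to a strided array copy; a plain-Java sketch with stand-in names (not the GeometryArray method itself):

```java
// Sketch of the bulk setCoordinates semantics: copy `length` vertices
// (3 doubles each) from the source coordinate array, beginning at
// source vertex `start`, into the destination array beginning at
// destination vertex `index`.
public class CoordCopy {
    public static void setCoordinates(double[] dest, int index,
                                      double[] coordinates, int start, int length) {
        System.arraycopy(coordinates, 3 * start, dest, 3 * index, 3 * length);
    }
}
```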
Sets or retrieves the color associated with the vertex at the specified index. The
index parameter is the vertex index in this geometry array. The color parameter
is an array of three or four values containing the new color.
Sets or retrieves the color associated with the vertex at the specified index. The
index parameter is the vertex index in this geometry array. The color parameter
is a vector containing the new color.
Sets or retrieves the colors associated with the vertices starting at the specified
index. The index parameter is the starting vertex index in this geometry array.
The colors parameter is an array of 3n or 4n values containing n new colors.
The length of the colors array determines the number of vertices copied.
Sets or retrieves the colors associated with the vertices starting at the specified
index. The index parameter is the starting vertex index in this geometry array.
The colors parameter is an array of vectors containing the new colors. The
length of the colors array determines the number of vertices copied.
These methods set the colors associated with the vertices starting at the specified
index for this object, using data in colors starting at index start for length
colors. The index parameter is the starting destination vertex index in this geom-
etry array. The colors parameter is an array of 3n or 4n values containing n new
colors.
public void setColors(int index, Color3f colors[], int start,
int length)
public void setColors(int index, Color4f colors[], int start,
int length)
public void setColors(int index, Color3b colors[], int start,
int length)
public void setColors(int index, Color4b colors[], int start,
int length)
These methods set the colors associated with the vertices starting at the specified
index for this object, using data in colors starting at index start for length
colors. The index parameter is the starting destination vertex index in this geom-
etry array. The colors parameter is an array of vectors containing new colors.
Sets or retrieves the normal associated with the vertex at the specified index. The
index parameter is the vertex index in this geometry array. The normal parame-
ter is the new normal.
Sets or retrieves the normal associated with the vertex at the specified index. The
index parameter is the vertex index in this geometry array. The normal parame-
ter is a vector containing the new normal.
Sets or retrieves the normals associated with the vertices starting at the specified
index. The index parameter is the starting vertex index in this geometry array.
The normals parameter is an array of 3n values containing n new normals. The
length of the normals array determines the number of vertices copied.
Sets or retrieves the normals associated with the vertices starting at the specified
index. The index parameter is the starting vertex index in this geometry array.
The normals parameter is an array of vectors containing new normals. The
length of the normals array determines the number of vertices copied.
These methods set the normals associated with the vertices starting at the speci-
fied index for this object, using data in normals starting at index start and end-
ing at index start+length. The index parameter is the starting destination
vertex index in this geometry array.
This method retrieves the number of texture coordinate sets in this Geometr-
yArray object.
This method retrieves the length of the texture coordinate set mapping array of
this GeometryArray object.
This method retrieves the texture coordinate set mapping array from this Geom-
etryArray object.
These methods set and retrieve the texture coordinate associated with the vertex
at the specified index in the specified texture coordinate set for this object.
These methods set and retrieve the texture coordinates associated with the verti-
ces starting at the specified index in the specified texture coordinate set for this
object. The set methods copy the entire source array to this geometry array. For
the get methods, the length of the destination array determines the number of
texture coordinates copied.
These methods set and retrieve the texture coordinates associated with the verti-
ces starting at the specified index in the specified texture coordinate set for this
object using data in texCoords starting at index start and ending at index
start+length.
Sets or retrieves the initial coordinate index for this GeometryArray object. This
index specifies the first coordinate within the array of coordinates referenced by
this geometry array that is actually used in rendering or other operations such as
picking and collision. This attribute is initialized to 0. This attribute is only used
when the data mode for this geometry array object is BY_REFERENCE.
Sets or retrieves the initial color index for this GeometryArray object. This index
specifies the first color within the array of colors referenced by this geometry
array that is actually used in rendering or other operations such as picking and
collision. This attribute is initialized to 0. This attribute is only used when the
data mode for this geometry array object is BY_REFERENCE.
Sets or retrieves the initial normal index for this GeometryArray object. This
index specifies the first normal within the array of normals referenced by this
geometry array that is actually used in rendering or other operations such as
picking and collision. This attribute is initialized to 0. This attribute is only used
when the data mode for this geometry array object is BY_REFERENCE.
Sets or retrieves the initial texture coordinate index for the specified texture coor-
dinate set for this GeometryArray object. This index specifies the first texture
coordinate within the array of texture coordinates referenced by this geometry
array that is actually used in rendering or other operations such as picking and
collision. This attribute is initialized to 0. This attribute is only used when the
data mode for this geometry array object is BY_REFERENCE.
Sets or retrieves the coordinate array reference to the specified array. The array
contains x, y, and z values for each vertex (for a total of 3*n values, where n is
the number of vertices). Only one of coordRefFloat, coordRefDouble,
coordRef3f, or coordRef3d may be non-null (or they may all be null). An
attempt to set more than one of these attributes to a non-null reference will result
in an exception being thrown. If all coordinate array references are null, the
entire geometry array object is treated as if it were null—any Shape3D or Morph
node that uses this geometry array will not be drawn.
Sets or retrieves the coordinate array reference to the specified array. The array
contains a Point3f or Point3d object for each vertex. Only one of coordRef-
Float, coordRefDouble, coordRef3f, or coordRef3d may be non-null (or they
may all be null). An attempt to set more than one of these attributes to a non-null
reference will result in an exception being thrown. If all coordinate array refer-
ences are null, the entire geometry array object is treated as if it were null—any
Shape3D or Morph node that uses this geometry array will not be drawn.
Sets or retrieves the color array reference to the specified array. The array con-
tains red, green, blue, and, optionally, alpha values for each vertex (for a total of
3*n or 4*n values, where n is the number of vertices). Only one of colorRef-
Float, colorRefByte, colorRef3f, colorRef4f, colorRef3b, or colorRef4b
may be non-null (or they may all be null). An attempt to set more than one of
these attributes to a non-null reference will result in an exception being thrown.
If all color array references are null and colors are enabled (i.e., the vertexFormat
includes either COLOR_3 or COLOR_4), the entire geometry array object is treated
as if it were null—any Shape3D or Morph node that uses this geometry array
will not be drawn.
Sets or retrieves the color array reference to the specified array. The array con-
tains a Color3f, Color4f, Color3b, or Color4b object for each vertex. Only one
of colorRefFloat, colorRefByte, colorRef3f, colorRef4f, colorRef3b, or
colorRef4b may be non-null (or they may all be null). An attempt to set more
than one of these attributes to a non-null reference will result in an exception
being thrown. If all color array references are null and colors are enabled (i.e.,
the vertexFormat includes either COLOR_3 or COLOR_4), the entire geometry array
object is treated as if it were null—any Shape3D or Morph node that uses this
geometry array will not be drawn.
Sets or retrieves the float normal array reference to the specified array. The array
contains floating-point nx, ny, and nz values for each vertex (for a total of 3*n
values, where n is the number of vertices). Only one of normalRefFloat or
normalRef3f may be non-null (or they may all be null). An attempt to set more
than one of these attributes to a non-null reference will result in an exception
being thrown. If all normal array references are null and normals are enabled
(i.e., the vertexFormat includes NORMAL), the entire geometry array object is
treated as if it were null—any Shape3D or Morph node that uses this geometry
array will not be drawn.
Sets or retrieves the normal array reference to the specified array. The array con-
tains a Vector3f object for each vertex. Only one of normalRefFloat or
normalRef3f may be non-null (or they may all be null). An attempt to set more
than one of these attributes to a non-null reference will result in an exception
being thrown. If all normal array references are null and normals are enabled
(i.e., the vertexFormat includes NORMAL), the entire geometry array object is
treated as if it were null—any Shape3D or Morph node that uses this geometry
array will not be drawn.
Sets or retrieves the float texture coordinate array reference for the specified tex-
ture coordinate set to the specified array. The array contains floating-point s, t,
and, optionally, r values for each vertex (for a total of 2*n or 3*n values, where
n is the number of vertices). Only one of texCoordRefFloat, texCoordRef2f,
or texCoordRef3f may be non-null (or they may all be null). An attempt to set
more than one of these attributes to a non-null reference will result in an excep-
tion being thrown. If all texCoord array references are null and texture coordi-
nates are enabled (i.e., the vertexFormat includes either TEXTURE_COORDINATE_2
or TEXTURE_COORDINATE_3), the entire geometry array object is treated as if it
were null—any Shape3D or Morph node that uses this geometry array will not
be drawn.
Sets the texture coordinate array reference for the specified texture coordinate set
to the specified array. The array contains a TexCoord2f or TexCoord3f object for
each vertex. Only one of texCoordRefFloat, texCoordRef2f, or texCoordRef3f
may be non-null (or they may all be null). An attempt to set more than one of
these attributes to a non-null reference will result in an exception being thrown.
If all texCoord array references are null and texture coordinates are enabled (i.e.,
the vertexFormat includes either TEXTURE_COORDINATE_2 or
TEXTURE_COORDINATE_3), the entire geometry array object is treated as if it were
null—any Shape3D or Morph node that uses this geometry array will not be drawn.
Sets or retrieves the interleaved vertices array reference to the specified array.
The vertex components must be stored in a predetermined order in the array. The
order is: texture coordinates, colors, normals, and positional coordinates. Only
those components that are enabled appear in the vertex. The number of words per
vertex depends on which vertex components are enabled. Texture coordinates, if
enabled, use two words per vertex for TEXTURE_COORDINATE_2 or three words
per vertex for TEXTURE_COORDINATE_3. Colors, if enabled, use three words per
vertex for COLOR_3 or four words per vertex for COLOR_4. Normals, if enabled,
use three words per vertex. Positional coordinates, which are always enabled, use
three words per vertex. For example, the format of interleaved data for a Geom-
etryArray object whose vertexFormat includes COORDINATES, COLOR_3, and NOR-
MALS would be: red, green, blue, Nx, Ny, Nz, x, y, z. All components of a vertex
are stored in adjacent memory locations. The first component of vertex 0 is
stored beginning at index 0 in the array. The first component of vertex 1 is stored
beginning at index words_per_vertex in the array. The total number of words
needed to store n vertices is words_per_vertex*n.
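The layout rules above can be sketched in plain Java. The flag values below are stand-ins (the real constants are defined on GeometryArray), but the per-vertex word counts and the component order follow the text: texture coordinates, then colors, then normals, then positional coordinates.

```java
// Sketch of the interleaved vertex layout described above. The flag
// constants are illustrative stand-ins; what matters is the word count
// contributed by each enabled component and the offset arithmetic.
public class InterleavedLayout {
    public static final int COORDINATES          = 1;
    public static final int NORMALS              = 2;
    public static final int COLOR_3              = 4;
    public static final int COLOR_4              = 8;
    public static final int TEXTURE_COORDINATE_2 = 16;
    public static final int TEXTURE_COORDINATE_3 = 32;

    public static int wordsPerVertex(int vertexFormat) {
        int words = 3;                                          // coordinates, always present
        if ((vertexFormat & TEXTURE_COORDINATE_2) != 0) words += 2;
        if ((vertexFormat & TEXTURE_COORDINATE_3) != 0) words += 3;
        if ((vertexFormat & COLOR_3) != 0) words += 3;
        if ((vertexFormat & COLOR_4) != 0) words += 4;
        if ((vertexFormat & NORMALS) != 0) words += 3;
        return words;
    }

    // Index of the first word of vertex i in the interleaved array.
    public static int vertexOffset(int vertexFormat, int i) {
        return wordsPerVertex(vertexFormat) * i;
    }
}
```

For the COORDINATES, COLOR_3, and NORMALS example in the text, this yields nine words per vertex, so vertex 1 begins at array index 9.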
Methods
This method updates geometry data that is accessed by reference. This method is
called by the updateData method of a GeometryArray object to effect safe
updates to vertex data that is referenced by that object. Applications that wish to
modify such data must implement this method and perform all updates within it.
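The callback pattern described above can be sketched with stand-in types (the real GeometryUpdater interface and updateData method belong to the Java 3D API; the class below only mirrors their shape): all modifications to by-reference data happen inside the callback, so the system can synchronize them.

```java
// Sketch of the updateData callback pattern, with stand-in types.
// The application never mutates the referenced array directly; it hands
// a callback to updateData, which runs it under synchronization.
public class UpdaterSketch {
    interface Updater {                 // stand-in for GeometryUpdater
        void updateData(float[] coords);
    }

    private final float[] coords;       // by-reference vertex data

    public UpdaterSketch(float[] coords) { this.coords = coords; }

    // Stand-in for GeometryArray.updateData(GeometryUpdater).
    public synchronized void updateData(Updater u) {
        u.updateData(coords);
        // ... a real implementation would mark the geometry dirty here
    }
}
```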
Constructors
Constructs an empty PointArray object with the specified vertex format and
number of vertices.
Constructors
Constructs an empty LineArray object with the specified vertex format and num-
ber of vertices.
Constructs an empty LineArray object with the specified number of vertices,
vertex format, number of texture coordinate sets, and texture coordinate map-
ping array.
Constructors
Constructs an empty TriangleArray object with the specified vertex format and
number of vertices.
Constructors
Constructs an empty QuadArray object with the specified vertex format and
number of vertices.
Constructors
The GeometryStripArray object has the following constructors.
The sum of the vertex counts for all strips, as specified by the
stripVertexCounts array, must equal the total count of all vertices as
specified by the vertexCount parameter.
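That constraint is a simple sum check; a plain-Java sketch with illustrative names:

```java
// Sketch of the strip-count constraint: the per-strip vertex counts
// must sum to the total vertexCount, otherwise the constructor rejects
// the arguments.
public class StripCounts {
    public static void checkStripVertexCounts(int vertexCount, int[] stripVertexCounts) {
        int sum = 0;
        for (int c : stripVertexCounts) sum += c;
        if (sum != vertexCount) {
            throw new IllegalArgumentException(
                "strip counts sum to " + sum + ", expected " + vertexCount);
        }
    }
}
```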
Methods
The GeometryStripArray object has the following methods.
This method gets an array containing a list of vertex counts for each strip.
Constructors
Constructors
Constructors
Constants
The IndexedGeometryArray object defines the following flags.
Constructors
The IndexedGeometryArray object has two constructors that accept the same
parameters as GeometryArray.
Methods
IndexedGeometryArray methods provide access (get and set methods) to the
individual vertex component index arrays that are used when rendering the
geometry. This access is allowed in two different modes: as individual index ele-
ments or as arrays of multiple index elements.
Sets or retrieves the coordinate index associated with the vertex at the specified
index.
Sets or retrieves the coordinate indices associated with the vertices starting at the
specified index.
Sets or retrieves the color index associated with the vertex at the specified index.
Sets or retrieves the color indices associated with the vertices starting at the spec-
ified index.
Sets or retrieves the normal index associated with the vertex at the specified
index.
Sets or retrieves the normal indices associated with the vertices starting at the
specified index.
These methods set and retrieve the texture coordinate index associated with the
vertex at the specified index in the specified texture coordinate set for this object.
These methods set and retrieve the texture coordinate indices associated with the
vertices starting at the specified index in the specified texture coordinate set for
this object.
Constructors
The IndexedPointArray object has the following constructors.
Constructors
The IndexedLineArray object has the following constructors.
Constructors
The IndexedTriangleArray object has the following constructors.
Constructors
The IndexedQuadArray object has the following constructors.
Constructors
The IndexedGeometryStripArray object has the following constructors.
Methods
The IndexedGeometryStripArray object has the following methods.
Constructors
The IndexedLineStripArray object has the following constructors.
Constructors
The IndexedTriangleStripArray object has the following constructors.
The IndexedTriangleFanArray object draws the array of vertices as a set of
connected triangle fans. An array of per-strip index counts specifies where the
separate strips (fans) appear in the indexed vertex array. For every strip in the set,
each vertex, beginning with the third vertex in the array, defines a triangle to be
drawn using the current vertex, the previous vertex, and the first vertex. This can
be thought of as a collection of convex polygons.
Constructors
The IndexedTriangleFanArray object has the following constructors.
Constants
The CompressedGeometry object specifies the following variables.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read its individual component field information.
This flag specifies that this CompressedGeometry allows reading the geometry
data reference information for this object. This is only used in by-reference
geometry mode.
Constructors
Methods
This method retrieves the size, in bytes, of the compressed geometry buffer.
This method retrieves the header for this CompressedGeometry object (see
Section 8.2.21, “CompressedGeometryHeader Object”). The header is copied
into the CompressedGeometryHeader object provided.
This method retrieves the compressed geometry associated with the Com-
pressedGeometry object. It copies the compressed geometry from the Com-
pressedGeometry object into the given array.
This method retrieves the compressed geometry data reference with which this
CompressedGeometry object was constructed. It is only valid in by-reference
mode.
This method retrieves the data access mode for this CompressedGeometry object.
Version 1.2, March 2000 221
8.2.21 CompressedGeometryHeader Object
Constants
These flags indicate whether RGB, alpha color, or normal information is initial-
ized in the compressed geometry buffer.
These indicate the major, minor, and minor-minor version numbers for the com-
pressed geometry format that was used to compress the geometry. If the version
number of compressed geometry is incompatible with the supported version of
compressed geometry in the current version of Java 3D, the compressed geome-
try object will not be rendered.
This flag describes the type of data in the compressed geometry buffer. Only one
type may be present in any given compressed geometry buffer.
This flag indicates whether a particular data component (for example, color) is
present in the compressed geometry buffer, preceding any geometric data. If a
particular data type is not present then this information will be inherited from the
Appearance object.
This flag indicates the size of the compressed geometry, in bytes, that needs to be
applied to every point in the compressed geometry buffer to restore the geometry
to its original (uncompressed) position.
This flag contains the offset in bytes of the start of the compressed geometry
from the beginning of the compressed geometry buffer.
These two fields specify points that define the upper and lower bounds of
the x, y, and z components for all positions in the compressed geometry buffer. If
null, a lower bound of (–1,–1,–1) and an upper bound of (1,1,1) is assumed. Java
3D will use this information to construct a bounding box around compressed
geometry objects that are used in nodes for which the auto compute bounds flag
is true. The default value for both points is null.
Constructor
public CompressedGeometryHeader()
The geometric extent of a Raster object is a single 3D point, specified by the ras-
ter position. This means that geometry-based picking or collision with a Raster
object will only intersect the object at this single point; the 2D raster image is
neither pickable nor collidable.
Constants
The Raster object defines the following flags.
These flags specify that the Raster object allows reading or writing of the posi-
tion, offset, image, depth component, or size, or reading of the type.
Specifies a Raster object with color data. In this mode, the ImageComponent ref-
erence must point to a valid ImageComponent object.
Specifies a Raster object with depth (Z-buffer) data. In this mode, the depth com-
ponent reference must point to a valid DepthComponent object.
Specifies a Raster object with both color and depth (Z-buffer) data. In this mode,
the image component reference must point to a valid ImageComponent object,
and the depth component reference must point to a valid DepthComponent
object.
Constructors
public Raster()
Constructs and initializes a new Raster object with the specified values.
Methods
These methods set and retrieve the position, in object coordinates, of this raster.
This position is transformed into device coordinates and is used as the upper-left
corner of the raster.
These methods set and retrieve the type of this Raster object. The type is one of
the following: RASTER_COLOR, RASTER_DEPTH, or RASTER_COLOR_DEPTH.
These methods set and retrieve the offset within the array of pixels at which to
start copying.
These methods set and retrieve the number of pixels to be copied from the pixel
array.
These methods set and retrieve the pixel array used to copy pixels to or from a
Canvas3D. This is used when the type is RASTER_COLOR or RASTER_
COLOR_DEPTH.
These methods set and retrieve the DepthComponent used to copy pixels to or
from a Canvas3D. This is used when the type is RASTER_DEPTH or RASTER_
COLOR_DEPTH.
Constructors
Creates a Font3D object from the specified Font and FontExtrusion objects,
using the default value for the tessellation tolerance. The default value is as fol-
lows:
Creates a Font3D object from the specified Font and FontExtrusion objects,
using the specified tessellation tolerance. The FontExtrusion object (see
Section 8.2.24, “FontExtrusion Object”) contains the extrusion path to use on the
2D Font glyphs. To ensure correct rendering, the font must be created with the
default AffineTransform. Passing null for the FontExtrusion parameter results in
no extrusion being done. The tessellationTolerance parameter corresponds
to the flatness parameter in the java.awt.Shape.getPathIterator method.
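The flatness correspondence can be seen directly with the real java.awt API: getPathIterator(at, flatness) returns only straight segments, subdividing each curve until it deviates from the true curve by no more than the flatness value. A small sketch:

```java
// Demonstrates the flatness parameter of java.awt.Shape.getPathIterator,
// which the tessellationTolerance corresponds to: a flattening iterator
// replaces curves with line segments whose deviation from the true
// curve is bounded by `flatness`.
import java.awt.geom.Path2D;
import java.awt.geom.PathIterator;

public class FlattenDemo {
    public static int countSegments(double flatness) {
        Path2D.Double p = new Path2D.Double();
        p.moveTo(0, 0);
        p.curveTo(1, 2, 3, 2, 4, 0);       // one cubic curve
        PathIterator it = p.getPathIterator(null, flatness);
        double[] coords = new double[6];
        int segments = 0;
        while (!it.isDone()) {
            int type = it.currentSegment(coords);
            // A flattening iterator never reports curve segments.
            if (type == PathIterator.SEG_CUBICTO || type == PathIterator.SEG_QUADTO) {
                throw new IllegalStateException("unexpected curve segment");
            }
            if (type == PathIterator.SEG_LINETO) segments++;
            it.next();
        }
        return segments;
    }
}
```

A tighter tolerance produces more (and shorter) line segments, just as a smaller tessellationTolerance produces more finely tessellated glyphs.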
Methods
This method returns the 3D bounding box of the specified glyph code.
This method returns the Java 2D font used to create this Font3D object.
This method retrieves the FontExtrusion object used to create this Font3D object
and copies it into the specified parameter. For information about the FontExtru-
sion object, see Section 8.2.24, “FontExtrusion Object.”
This method returns the tessellation tolerance with which this Font3D was cre-
ated.
Constructors
public FontExtrusion()
A null extrusion shape specifies that a straight line from 0.0 to 0.2 (straight
bevel) is used.
Creates a FontExtrusion object with the specified extrusion shape, using the
default tessellation tolerance. The extrusionShape parameter is used to construct
the edge contour of a Font3D object. Each shape begins with an implicit point at
0.0. The contour must be monotonic in x. An IllegalArgumentException is
thrown if extrusionShape contains multiple contours, if the contour is not
monotonic, or if the least x value of a contour point is not 0.0f.
Creates a FontExtrusion object with the specified shape, using the specified tes-
sellation tolerance. The specified shape is used to construct the edge contour of a
Font3D object. Each shape begins with an implicit point at 0.0. The contour
must be monotonic in x. The tessellationTolerance parameter corresponds to
the flatness parameter in the java.awt.Shape.getPathIterator method.
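For example, a simple beveled extrusion contour could be sketched as follows; the coordinate values are illustrative. Note that the contour starts at x = 0.0 and x never decreases:

```java
import java.awt.geom.GeneralPath;
import javax.media.j3d.FontExtrusion;

public class ExtrusionDemo {
    public static FontExtrusion build() {
        GeneralPath contour = new GeneralPath();
        contour.moveTo(0.0f, 0.0f);   // least x value of the contour must be 0.0f
        contour.lineTo(0.05f, 0.05f); // x is monotonically increasing
        contour.lineTo(0.15f, 0.05f);
        contour.lineTo(0.2f, 0.0f);
        return new FontExtrusion(contour); // default tessellation tolerance
    }
}
```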
Methods
These methods set and retrieve the 2D shape object associated with this FontEx-
trusion object. The Shape object describes the extrusion path used to create a 3D
glyph from a 2D glyph. The set method sets the FontExtrusion’s shape parame-
ter; the get method retrieves it.
This method returns the tessellation tolerance with which this FontExtrusion was
created.
of the Text3D NodeComponent object. Each Text3D object has a text position—
a point in 3D space where the text should be placed. The 3D text can be placed
around this position using different alignments and paths. An OrientedShape3D
node may also be used for drawing screen-aligned text (see Section 6.2.1,
“OrientedShape3D Node”).
Constants
The Text3D object defines the following flags.
These flags control reading and writing of the Font3D component information
for Font3D, the String object, the text position value, the text alignment value,
the text path value, the character spacing, and the bounding box.
Constructors
public Text3D()
Methods
These methods get and set the Font3D object associated with this Text3D object.
These methods get and set the character string associated with this Text3D
object.
These methods get and set the text position. The position parameter is used to
determine the initial placement of the string. The text position is used in conjunc-
tion with the alignment and path to determine how the glyphs are to be placed in
the scene. The default value is (0.0, 0.0, 0.0).
These methods set and get the text alignment policy for this Text3D NodeCom-
ponent object (see Figure 8-4). The alignment parameter is used to specify how
glyphs in the string are placed in relation to the position field. Valid values for
the alignment field are:
• ALIGN_CENTER: places the center of the string on the position point.
• ALIGN_FIRST: places the first character of the string on the position
point.
• ALIGN_LAST: places the last character of the string on the position point.
The default value of this field is ALIGN_FIRST.
These methods set and get the node’s path field. This field is used to specify how
succeeding glyphs in the string are placed in relation to the previous glyph (see
Figure 8-4). The path is relative to the local coordinate system of the Text3D
node. The default coordinate system (see Section 4.4, “Coordinate Systems”) is
right-handed with +Y being up, +X horizontal to the right, and +Z directed
toward the viewer. Valid values for this field are as follows:
• PATH_LEFT: places succeeding glyphs to the left (the –X direction) of the
current glyph.
• PATH_RIGHT: places succeeding glyphs to the right (the +X direction) of
the current glyph.
• PATH_UP: places succeeding glyphs above (the +Y direction) the current
glyph.
• PATH_DOWN: places succeeding glyphs below (the –Y direction) the cur-
rent glyph.
The default value of this field is PATH_RIGHT.
This method retrieves the 3D bounding box that encloses this Text3D object.
[Figure 8-4 graphic: the alignment settings (ALIGN_FIRST, ALIGN_CENTER,
ALIGN_LAST) and path settings (PATH_LEFT, PATH_RIGHT, PATH_UP,
PATH_DOWN), each shown relative to the text position point]
These methods set and get the character spacing used to construct the Text3D
string. This spacing is in addition to the regular spacing between glyphs as
defined in the Font object. A value of 1.0 in this space is measured as the width
of the largest glyph in the 2D font. The default value is 0.0.
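Putting the preceding methods together, a Text3D object might be configured as in this sketch; the font choice and spacing value are illustrative assumptions:

```java
import java.awt.Font;
import javax.media.j3d.Font3D;
import javax.media.j3d.FontExtrusion;
import javax.media.j3d.Text3D;
import javax.vecmath.Point3f;

public class Text3DDemo {
    public static Text3D build() {
        Font3D font3D = new Font3D(new Font("Helvetica", Font.PLAIN, 1),
                                   new FontExtrusion());
        Text3D text = new Text3D(font3D, "Java 3D");
        text.setPosition(new Point3f(0.0f, 0.0f, 0.0f)); // the default position
        text.setAlignment(Text3D.ALIGN_CENTER);  // center the string on the position
        text.setPath(Text3D.PATH_RIGHT);         // the default layout direction
        text.setCharacterSpacing(0.05f);         // extra spacing between glyphs
        return text;
    }
}
```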
Java 3D introduces a new view model that takes Java’s vision of “write once,
run anywhere” and generalizes it to include display devices and six-degrees-of-
freedom input peripherals such as head trackers. This “write once, view every-
where” nature of the new view model means that an application or applet written
using the Java 3D view model can render images to a broad range of display
devices, including standard computer displays, multiple-projection display
rooms, and head-mounted displays, without modification of the scene graph. It
also means that the same application, once again without modification, can ren-
der stereoscopic views and can take advantage of the input from a head tracker to
control the rendered view.
Java 3D’s view model achieves this versatility by cleanly separating the virtual
and the physical world. This model distinguishes between how an application
positions, orients, and scales a ViewPlatform object (a viewpoint) within the vir-
tual world and how the Java 3D renderer constructs the final view from that
viewpoint’s position and orientation. The application controls the ViewPlatform’s
position and orientation; the renderer computes what view to render using this
position and orientation, a description of the end-user’s physical environment,
and the user’s position and orientation within the physical environment.
This chapter first explains why Java 3D chose a different view model and some
of the philosophy behind that choice. It next describes how that model operates
in the simple case of a standard computer screen without head tracking—the
most common case. Finally, it presents the relevant parts of the API from a
developer’s perspective. Appendix C, “View Model Details,” describes the
Java 3D view model from an advanced developer and Java 3D implementor’s
perspective.
include the size of the physical display, how the display is mounted (on the user’s
head or on a table), whether the computer knows the user’s head location in three
space, the head mount’s actual field of view, the display’s pixels per inch, and
other such parameters. For more information, see Appendix C, “View Model
Details.”
any) defines the local physical world coordinate system known to a particular
instance of Java 3D.
[Figure 9-1 graphic: a View and its component objects beneath a virtual universe
and hi-res locale, via a BranchGroup, with associated PhysicalBody and
PhysicalEnvironment objects]
Figure 9-1 View Object, Its Component Objects, and Their Interconnection
The view-related objects shown in Figure 9-1 and their roles are as follows. For
each of these objects, the portion of the API that relates to modifying the virtual
world and the portion of the API that is relevant to non-head-tracked standard
display configurations are derived in this chapter. The remainder of the details
are described in Appendix C, “View Model Details.”
• ViewPlatform: A leaf node that locates a view within a scene graph. The
ViewPlatform’s parents specify its location, orientation, and scale within
[Figure graphic: a scene graph with a Virtual Universe, hi-res Locale,
BranchGroup, and TransformGroup, plus PhysicalBody and PhysicalEnvironment
objects]
object about its center point. In that figure, the Behavior object modifies the
TransformGroup directly above the Shape3D node.
An alternative application scene graph, shown in Figure 9-3, leaves the central
object alone and moves the ViewPlatform around the world. If the shape node
contains a model of the earth, this application could generate a view similar to
that seen by astronauts as they orbit the earth.
Had we populated this world with more objects, this scene graph would allow
navigation through the world via the Behavior node.
[Figure 9-3 graphic: a Virtual Universe with a Locale object and BranchGroup
nodes]
Methods
These methods set and retrieve the coexistence center in virtual world policy.
The default attach policy is View.NOMINAL_HEAD. A ViewPlatform’s view attach
policy determines how Java 3D places the virtual eyepoint within the
ViewPlatform. The policy can have one of the following values:
• View.NOMINAL_HEAD: Ensures that the end-user’s nominal eye posi-
tion in the physical world corresponds to the virtual eye’s nominal eye po-
sition in the virtual world (the ViewPlatform’s origin). In essence, this
policy tells Java 3D to position the virtual eyepoint relative to the
ViewPlatform origin in the same way as the physical eyepoint is positioned
relative to its nominal physical-world origin. Deviations in the physical
eye’s position and orientation from nominal in the physical world generate
corresponding deviations of the virtual eye’s position and orientation in the
virtual world.
• View.NOMINAL_FEET: Ensures that the end-user’s virtual feet always
touch the virtual ground. This policy tells Java 3D to compute the physical-
to-virtual-world correspondence in a way that enforces this constraint.
Java 3D does so by appropriately offsetting the physical eye’s position by
the end-user’s physical height. Java 3D uses the nominalEyeHeightFrom-
Ground parameter found in the PhysicalBody object (see Section 9.10,
“The PhysicalBody Object”) to perform this computation.
• View.NOMINAL_SCREEN: Allows an application to always have the vir-
tual eyepoint appear at some “viewable” distance from a point of interest.
This policy tells Java 3D to compute the physical-to-virtual-world corre-
spondence in a way that ensures that the renderer moves the nominal vir-
tual eyepoint away from the point of interest by the amount specified by
the nominalEyeOffsetFromNominalScreen parameter found in the Phys-
icalBody object (see Section 9.10, “The PhysicalBody Object”).
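For instance, an avatar-style application that keeps the user's virtual feet on the virtual ground might set the attach policy as in this sketch:

```java
import javax.media.j3d.View;
import javax.media.j3d.ViewPlatform;

public class AttachPolicyDemo {
    public static ViewPlatform build() {
        ViewPlatform vp = new ViewPlatform();
        // The default is View.NOMINAL_HEAD; NOMINAL_FEET keeps the
        // end-user's virtual feet on the virtual ground
        vp.setViewAttachPolicy(View.NOMINAL_FEET);
        return vp;
    }
}
```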
ViewPlatform attached to the current View object. The eye and projection matri-
ces are constructed from the View object and its associated component objects.
[Figure graphic: a scene graph with a hi-res Locale transform L, model
transform T1, and view platform transform Tv1, plus PhysicalBody and
PhysicalEnvironment objects]
In our scene graph, what we would normally consider the model transformation
would consist of the following three transformations: L T1 T2. By multiplying
L T1 T2 by a vertex in the shape object, we would transform that vertex into the
virtual universe’s coordinate system. What we would normally consider the view
platform transformation would be (L Tv1)^-1, or Tv1^-1 L^-1. This presents a
problem, since coordinates in the virtual universe are 256-bit fixed-point values,
which cannot be used to efficiently represent transformed points.
Fortunately, however, there is a solution to this problem. Composing the model
and view platform transformations gives us

Tv1^-1 L^-1 L T1 T2 = Tv1^-1 I T1 T2 = Tv1^-1 T1 T2,

the matrix that takes vertices in an object’s local coordinate system and places
them in the ViewPlatform’s coordinate system. Note that the high-resolution
Locale transformations cancel each other out, which removes the need to actually
transform points into high-resolution VirtualUniverse coordinates. The general
formula of the matrix that transforms object coordinates to ViewPlatform coordi-
nates is Tvn^-1 … Tv2^-1 Tv1^-1 T1 T2 … Tm.
As was mentioned above, the View object contains the remainder of the view
information, specifically, the eye matrix, E, that takes points in the View-
Platform’s local coordinate system and translates them into the user’s eye coordi-
nate system, and the projection matrix, P, that projects objects in the eye’s
coordinate system into clipping coordinates. The final concatenation of matrices
for rendering our shape object “S” on the specified Canvas3D is
P E Tv1^-1 T1 T2. In general this is P E Tvn^-1 … Tv2^-1 Tv1^-1 T1 T2 … Tm.
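The cancellation of the Locale transform can be checked numerically with plain 4x4 matrices. This sketch uses pure Java with illustrative translation values and does not depend on any Java 3D classes:

```java
// Verifies numerically that the hi-res Locale transform L cancels when
// composing the view platform and model transformations:
// Tv1^-1 L^-1 L T1 T2 == Tv1^-1 T1 T2. Matrices are row-major 4x4.
public class LocaleCancellation {
    static double[][] mul(double[][] a, double[][] b) {
        double[][] c = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }
    static double[][] identity() {
        double[][] m = new double[4][4];
        for (int i = 0; i < 4; i++) m[i][i] = 1.0;
        return m;
    }
    static double[][] translation(double x, double y, double z) {
        double[][] m = identity();
        m[0][3] = x; m[1][3] = y; m[2][3] = z;
        return m;
    }
    public static void main(String[] args) {
        double[][] L      = translation(1000, 0, 0);  // hi-res Locale offset
        double[][] Linv   = translation(-1000, 0, 0); // its inverse
        double[][] T1     = translation(2, 0, 0);     // model transforms
        double[][] T2     = translation(0, 3, 0);
        double[][] Tv1inv = translation(0, 0, -5);    // inverse view platform transform

        double[][] full    = mul(mul(mul(mul(Tv1inv, Linv), L), T1), T2);
        double[][] reduced = mul(mul(Tv1inv, T1), T2);
        boolean equal = true;
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                if (Math.abs(full[i][j] - reduced[i][j]) > 1e-9) equal = false;
        System.out.println("Locale transform cancels: " + equal);
    }
}
```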
The details of how Java 3D constructs the matrices E and P in different end-user
configurations are described in Appendix C, “View Model Details.”
Constructors
The View object specifies the following constructor.
public View()
Methods
The View object specifies the following methods.
These methods set and retrieve the View’s PhysicalBody object. See
Section 9.10, “The PhysicalBody Object,” for more information on the Physical-
Body object.
These methods set and retrieve the View’s PhysicalEnvironment object. See
Section 9.11, “The PhysicalEnvironment Object,” for more information on the
PhysicalEnvironment object.
This method attaches a ViewPlatform leaf node to this View, replacing the exist-
ing ViewPlatform. If the ViewPlatform is part of a live scene graph, or is subse-
quently made live, the scene graph is rendered into all canvases in this View
object’s list of Canvas3D objects. To remove a ViewPlatform without attaching a
new one—causing the View to no longer be rendered—a null reference may be
passed to this method. In this case, the behavior is as if rendering were simulta-
neously stopped on all canvases attached to the View—the last frame that was
rendered in each remains visible until the View is again attached to a live
ViewPlatform object. See Section 6.11, “ViewPlatform Node,” for more informa-
tion on ViewPlatform objects.
These methods set, retrieve, add to, insert after, and remove a Canvas3D object
from this View. The index specifies the reference to the Canvas3D object within
the View object. See Section 9.9, “The Canvas3D Object” for more information
on Canvas3D objects.
Methods
These two methods set and retrieve the current projection policy for this view.
The projection policies are as follows:
• PARALLEL_PROJECTION: Specifies that Java 3D should compute a
parallel projection.
• PERSPECTIVE_PROJECTION: Specifies that Java 3D should compute a
perspective projection. This is the default setting.
These methods set and retrieve the local eye lighting flag, which indicates
whether the local eyepoint is used in lighting calculations for perspective projec-
tions. If this flag is set to true, the view vector is calculated per vertex based on
the direction from the actual eyepoint to the vertex. If this flag is set to false, a
single view vector is computed from the eyepoint to the center of the view frus-
tum. This is called infinite eye lighting. Local eye lighting is disabled by default,
and is ignored for parallel projections.
Constants
These variables specify the policy for resizing and moving windows; they are
used as values for windowResizePolicy and windowMovementPolicy. PHYSICAL_
WORLD specifies that the specified action takes place only in the physical world.
VIRTUAL_WORLD specifies that Java 3D applies the associated policy in the virtual
world.
Methods
This variable specifies how Java 3D modifies the view when a user resizes a win-
dow. A value of PHYSICAL_WORLD states that Java 3D will treat window resizing
operations as only happening in the physical world. This implies that rendered
objects continue to fill the same percentage of the newly sized window, using
more or less pixels to draw those objects, depending on whether the window
grew or shrank in size. A value of VIRTUAL_WORLD states that Java 3D will treat
window resizing operations as also happening in the virtual world whenever a
resizing occurs in the physical world. This implies that rendered objects remain
the same size (use the same number of pixels), but since the window becomes
larger or smaller, the user sees more or less of the virtual world. The default
value is PHYSICAL_WORLD.
This variable specifies what part of the virtual world Java 3D will draw as a func-
tion of the window location on the display screen. A value of PHYSICAL_WORLD
states that the window acts as if it moves only on the physical screen. As the user
moves the window on the screen, the window’s position on the screen changes
but Java 3D continues to draw exactly the same image within that window. A
value of VIRTUAL_WORLD states that the window acts as if it also moves within the
virtual world. As the user moves the window on the physical screen, the win-
dow’s position on the screen changes and the image that Java 3D draws changes
as well to match what would be visible in the virtual world from a window in
that new position. The default value is PHYSICAL_WORLD.
Methods
The front clip policy determines where Java 3D places the front clipping plane.
The value is one of the following: PHYSICAL_EYE, PHYSICAL_SCREEN, VIRTUAL_
EYE, or VIRTUAL_SCREEN. The default value is PHYSICAL_EYE.
The back clip policy determines where Java 3D places the back clipping plane.
The value is one of the following: PHYSICAL_EYE, PHYSICAL_SCREEN, VIRTUAL_
EYE, or VIRTUAL_SCREEN. The default value is PHYSICAL_EYE.
In the default non-head-tracked mode, this value specifies the view model’s hori-
zontal field of view in radians. This value is ignored when the view model is
operating in head-tracked mode, or when the Canvas3D’s window eyepoint pol-
icy is set to a value other than the default setting of RELATIVE_TO_FIELD_OF_
VIEW (see Section C.5.3, “Window Eyepoint Policy”).
This value specifies the distance away from the clip origin, specified by the front
clip policy variable, in the direction of gaze where objects stop disappearing.
Objects closer than the clip origin (eye or screen) plus the front clip distance are
not drawn. Measurements are done in the space (physical or virtual) that is spec-
ified by the associated front clip policy parameter.
This value specifies the distance away from the clip origin (specified by the back
clip policy variable) in the direction of gaze where objects begin disappearing.
Objects farther away from the clip origin (eye or screen) plus the back clip dis-
tance are not drawn. Measurements are done in the space (physical or virtual)
that is specified by the associated back clip policy parameter. The View object’s
back clip distance is ignored if the scene graph contains an active Clip leaf node
(see Section 6.5, “Clip Node”).
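A typical non-head-tracked configuration of these clip and field-of-view settings might be sketched as follows; the distance and angle values are illustrative assumptions:

```java
import javax.media.j3d.View;

public class ClipConfigDemo {
    public static View build() {
        View view = new View();
        view.setFrontClipPolicy(View.PHYSICAL_EYE);  // the default
        view.setBackClipPolicy(View.PHYSICAL_EYE);   // the default
        view.setFrontClipDistance(0.1);    // meters from the eye
        view.setBackClipDistance(100.0);   // keep the back/front ratio modest
        view.setFieldOfView(Math.PI / 4.0); // 45-degree horizontal field of view
        return view;
    }
}
```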
There are several considerations that need to be taken into account when choos-
ing values for the front and back clip distances.
• The front clip distance must be greater than 0.0 in physical eye coordi-
nates.
• The front clipping plane must be in front of the back clipping plane, that
is, the front clip distance must be less than the back clip distance in phys-
ical eye coordinates.
• The front and back clip distances, in physical eye coordinates, must be less
than the largest positive single-precision floating point value, Float.MAX_
VALUE. In practice, since these physical eye coordinate distances are in
meters, the values should be much less than that.
• The ratio of the back distance divided by the front distance, in physical eye
coordinates, affects Z-buffer precision. This ratio should be less than about
This method returns the time at which the most recent rendering frame started. It
is defined as the number of milliseconds since January 1, 1970 00:00:00 GMT.
Since multiple canvases might be attached to this View, the start of a frame is
defined as the point just prior to clearing any canvas attached to this View.
This method returns the duration, in milliseconds, of the most recently completed
rendering frame. The time taken to render all canvases attached to this View is
measured. This duration is computed as the difference between the start of the
most recently completed frame and the end of that frame. Since multiple can-
vases might be attached to this View, the start of a frame is defined as the point
just prior to clearing any canvas attached to this View, while the end of a frame
is defined as the point just after swapping the buffer for all canvases.
This method returns the frame number for this view. The frame number starts at
0 and is incremented prior to clearing all the canvases attached to this view.
This method copies the last k frame start time values into the user-specified array.
The most recent frame start time is copied to location 0 of the array, the next
most-recent frame start time is copied into location 1 of the array, and so on. If
times.length is smaller than maxFrameStartTimes, only the last times.length
These methods set and retrieve the minimum frame cycle time, in milliseconds,
for this view. The Java 3D renderer will ensure that the duration between each
frame is at least the specified number of milliseconds. The default value is 0.
The first method stops the behavior scheduler after all currently-scheduled
behaviors are executed. Any frame-based behaviors scheduled to wake up on the
next frame will be executed at least once before the behavior scheduler is
stopped. The method returns a pair of integers that specify the beginning and
ending time (in milliseconds since January 1, 1970 00:00:00 GMT) of the behavior
scheduler’s last pass. The second method starts the behavior scheduler running
after it has been stopped. The third method retrieves a flag that indicates whether
the behavior scheduler is currently running.
The first method stops traversing this view after the current state of the scene
graph is reflected on all canvases attached to this view. The renderers associated
with these canvases are also stopped. The second method starts traversing this
view and starts the renderers associated with all canvases attached to this view.
The third method returns a flag indicating whether the traverser is currently run-
ning on this view.
Note: The above six methods are heavy-weight methods intended for verification
and image capture (recording). They are not intended to be used for flow control.
This method requests that this View be scheduled for rendering as soon as possi-
ble. The repaint method may return before the frame has been rendered. If the
view is stopped, or if the view is continuously running (for example, due to a
free-running interpolator), this method will have no effect. Most applications will
not need to call this method, since any update to the scene graph or to viewing
parameters will automatically cause all affected views to be rendered.
These methods set and retrieve the scene antialiasing flag. Scene antialiasing is
either enabled or disabled for this view. If enabled, the entire scene will be
antialiased on each canvas in which scene antialiasing is available. Scene antial-
iasing is disabled by default.
Note: Line and point antialiasing are independent of scene antialiasing. If antial-
iasing is enabled for lines and points, the lines and points will be antialiased prior
to scene antialiasing.
The set method enables or disables automatic freezing of the depth buffer for
objects rendered during the transparent rendering pass (that is, objects rendered
using alpha blending) for this view. If enabled, depth buffer writes are disabled
during the transparent rendering pass regardless of the value of the depth-buffer-
write-enable flag in the RenderingAttributes object for a particular node. This
flag is enabled by default. The get method retrieves this flag.
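These per-view rendering controls, together with the minimum frame cycle time described earlier, can be sketched as follows:

```java
import javax.media.j3d.View;

public class RenderFlagsDemo {
    public static View build() {
        View view = new View();
        view.setSceneAntialiasingEnable(true);      // disabled by default
        view.setDepthBufferFreezeTransparent(true); // the default
        view.setMinimumFrameCycleTime(20);          // cap at roughly 50 frames/second
        return view;
    }
}
```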
Methods
These methods provide applications with information concerning the underlying
display hardware, such as the screen’s width and height in pixels or in meters.
These methods retrieve the width and height (in pixels) of this Screen3D. The
second method copies the width and height into the specified Dimension object.
These methods retrieve the screen’s (image plate’s) physical width and height in
meters.
These methods set the width and height (in pixels) of this off-screen Screen3D.
The default size for off-screen Screen3D objects is (0,0).
Note: The off-screen size, physical width, and physical height must be set prior to
rendering to the associated off-screen canvas. Failure to do so will result in an
exception.
Constructors
The Canvas3D object specifies the following constructors.
This constructs and initializes a new Canvas3D object that Java 3D can render
into. The following Canvas3D parameters are initialized to default values as
shown:
Parameter Default Value
left manual eye in image plate (0.142, 0.135, 0.4572)
right manual eye in image plate (0.208, 0.135, 0.4572)
stereo enable true
double buffer enable true
monoscopic view policy View.CYCLOPEAN_EYE_VIEW
off-screen mode false
off-screen buffer null
off-screen location (0,0)
This constructs and initializes a new Canvas3D object that Java 3D can render
into. If the graphicsConfiguration argument is null, a GraphicsConfiguration
object will be constructed using the default GraphicsConfigTemplate3D (see
Section 9.9.4, “GraphicsConfigTemplate3D Object”).
For more information on the GraphicsConfiguration object, see the Java 2D spec-
ification, which is part of the AWT in JDK 1.2.
These methods, inherited from the parent Canvas class, retrieve the Canvas3D’s
screen position and size in pixels.
This method retrieves the state of the renderer for this Canvas3D object.
The first method sets the off-screen buffer for this Canvas3D. The specified
image is written into by the Java 3D renderer. The size of the specified Image-
Component determines the size, in pixels, of this Canvas3D – the size inherited
from Component is ignored. The second method retrieves the off-screen buffer
for this Canvas3D.
Note: The size, physical width, and physical height of the associated Screen3D
must be set explicitly prior to rendering. Failure to do so will result in an excep-
tion.
This method schedules the rendering of a frame into this Canvas3D’s off-screen
buffer. The rendering is done from the point of view of the View object to which
this Canvas3D has been added. No rendering is performed if this Canvas3D
object has not been added to an active View. This method does not wait for the
rendering to actually happen. An application that wishes to know when the ren-
dering is complete must either subclass Canvas3D and override the postSwap
method, or call waitForOffScreenRendering. An IllegalStateException is
thrown if this Canvas3D is not in off-screen mode, or if either the width or the
height of the associated Screen3D’s size is ≤ 0, or if the associated Screen3D’s
physical width or height is ≤ 0.
This method waits for this Canvas3D’s off-screen rendering to be done. This
method will wait until the postSwap method of this off-screen Canvas3D has
completed. If this Canvas3D has not been added to an active view or if the ren-
derer is stopped for this Canvas3D, this method will return immediately. This
method must not be called from a render callback method of an off-screen
Canvas3D.
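The off-screen sequence described above might look like this sketch; the buffer size, the physical screen dimensions, the GraphicsConfiguration, and the attachment of the canvas to an active View are illustrative assumptions set up elsewhere:

```java
import java.awt.GraphicsConfiguration;
import java.awt.image.BufferedImage;
import javax.media.j3d.Canvas3D;
import javax.media.j3d.ImageComponent;
import javax.media.j3d.ImageComponent2D;

public class OffScreenDemo {
    public static BufferedImage render(GraphicsConfiguration config) {
        Canvas3D canvas = new Canvas3D(config, true); // off-screen mode

        // Screen3D size and physical dimensions must be set before rendering
        canvas.getScreen3D().setSize(256, 256);
        canvas.getScreen3D().setPhysicalScreenWidth(0.0722);  // meters (assumed)
        canvas.getScreen3D().setPhysicalScreenHeight(0.0722); // meters (assumed)

        BufferedImage bi = new BufferedImage(256, 256, BufferedImage.TYPE_INT_ARGB);
        canvas.setOffScreenBuffer(new ImageComponent2D(ImageComponent.FORMAT_RGBA, bi));

        // The canvas must already be attached to an active View for this to render
        canvas.renderOffScreenBuffer();
        canvas.waitForOffScreenRendering();
        return canvas.getOffScreenBuffer().getImage();
    }
}
```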
These methods set the location of this off-screen Canvas3D. The location is the
upper-left corner of the Canvas3D relative to the upper-left corner of the corre-
sponding off-screen Screen3D. The function of these methods is similar to that
of Component.setLocation for on-screen Canvas3D objects. The default loca-
tion is (0,0).
These methods retrieve the location of this off-screen Canvas3D. The location is
the upper-left corner of the Canvas3D relative to the upper-left corner of the cor-
responding off-screen Screen3D. The function of these methods is similar to that
of Component.getLocation for on-screen Canvas3D objects. The second
method stores the location in the specified Point object. This method is useful if
the caller wants to avoid allocating a new Point object on the heap.
These methods set or retrieve the flag indicating whether this Canvas3D has ste-
reo enabled. If enabled, Java 3D generates left and right eye images. If the
Canvas3D’s StereoAvailable flag is false, Java 3D displays only the left eye’s
view even if an application sets StereoEnable to true. This parameter allows
applications to enable or disable stereo on a canvas-by-canvas basis.
This method specifies whether the underlying hardware supports double buffer-
ing on this canvas. This is equivalent to:
((Boolean)queryProperties().get("doubleBufferAvailable")).booleanValue()
These methods set or retrieve the flag indicating whether this Canvas3D has dou-
ble buffering enabled. If disabled, all drawing is to the front buffer and no buffer
swap will be done between frames. It should be stressed that running Java 3D
with double buffering disabled is not recommended.
This method returns a read-only Map object containing key-value pairs that
define various properties for this Canvas3D. All of the keys are String objects.
The values are key-specific, but most will be Boolean, Integer, Double, or String
objects. The currently-defined keys are:
Constructors
public GraphicsConfigTemplate3D()
Methods
These methods set and retrieve the double-buffering attribute. The valid values
are: REQUIRED, PREFERRED, and UNNECESSARY.
These methods set and retrieve the stereo attribute. The valid values are:
REQUIRED, PREFERRED, and UNNECESSARY.
These methods set and retrieve the scene antialiasing attribute. The valid values
are: REQUIRED, PREFERRED, and UNNECESSARY.
These methods set and retrieve the depth buffer size requirement.
These methods set and retrieve the number of red, green, and blue bits requested
by this template.
public java.awt.GraphicsConfiguration getBestConfiguration(java.awt.GraphicsConfiguration[] gc)
This method returns the “best” configuration possible that passes the criteria
defined in the GraphicsConfigTemplate3D.
public boolean isGraphicsConfigSupported(java.awt.GraphicsConfiguration gc)
This method returns a boolean indicating whether or not the given GraphicsCon-
figuration can be used to create a drawing surface that can be rendered to. This
method returns true if this GraphicsConfiguration object can be used to create
surfaces that can be rendered to, false if the GraphicsConfiguration cannot be
used to create a drawing surface usable by this API.
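A typical use, sketched here, fills in a template and passes it to GraphicsDevice.getBestConfiguration when creating a Canvas3D; the particular attribute values are illustrative:

```java
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;
import javax.media.j3d.Canvas3D;
import javax.media.j3d.GraphicsConfigTemplate3D;

public class TemplateDemo {
    public static Canvas3D build() {
        GraphicsConfigTemplate3D template = new GraphicsConfigTemplate3D();
        template.setSceneAntialiasing(GraphicsConfigTemplate3D.PREFERRED);
        template.setStereo(GraphicsConfigTemplate3D.UNNECESSARY);
        template.setDepthSize(16); // minimum depth buffer bits requested

        GraphicsDevice device = GraphicsEnvironment
                .getLocalGraphicsEnvironment().getDefaultScreenDevice();
        GraphicsConfiguration config = device.getBestConfiguration(template);
        return new Canvas3D(config);
    }
}
```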
Constructors
public PhysicalBody()
Constructors
public PhysicalEnvironment()
Behavior nodes provide the means for animating objects, processing key-
board and mouse inputs, reacting to movement, and enabling and processing pick
events. Behavior nodes contain Java code and state variables. A Behavior node’s
Java code can interact with Java objects, change node values within a Java 3D
scene graph, change the behavior’s internal state—in general, perform any com-
putation it wishes.
Simple behaviors can add surprisingly interesting effects to a scene graph. For
example, one can animate a rigid object by using a Behavior node to repetitively
modify the TransformGroup node that points to the object one wishes to animate.
Alternatively, a Behavior node can track the current position of a mouse and
modify portions of the scene graph in response.
The scheduling region defines a spatial volume that serves to enable the sched-
uling of Behavior nodes. A Behavior node is active (can receive stimuli) when-
ever a ViewPlatform’s activation volume intersects a Behavior object’s
scheduling region. Only active behaviors can receive stimuli.
The initialize method allows a Behavior object to initialize its internal state
and specify its initial wakeup condition(s). Java 3D invokes a behavior’s initial-
ize code when the behavior’s containing BranchGroup node is added to the vir-
tual universe. Java 3D does not invoke the initialize method in a new thread.
Version 1.2, March 2000 267
Thus, for Java 3D to regain control, the initialize method must not execute an
infinite loop: It must return. Furthermore, a wakeup condition must be set or else
the behavior’s processStimulus method is never executed.
The processStimulus method receives and processes a behavior’s ongoing mes-
sages. The Java 3D behavior scheduler invokes a Behavior node’s processStim-
ulus method when a ViewPlatform’s activation volume intersects a Behavior
object’s scheduling region and all of that behavior’s wakeup criteria are satisfied.
The processStimulus method performs its computations and actions (possibly
including the registration of state change information that could cause Java 3D to
wake other Behavior objects), establishes its next wakeup condition, and finally
exits.
10.3 Scheduling
As a virtual universe grows large, Java 3D must carefully husband its resources
to ensure adequate performance. In a 10,000-object virtual universe with 400 or
so Behavior nodes, a naive implementation of Java 3D could easily end up con-
suming the majority of its compute cycles in executing the behaviors associated
with the 400 Behavior objects before it draws a frame. In such a situation, the
frame rate could easily drop to unacceptable levels.
Behavior objects are usually associated with geometric objects in the virtual uni-
verse. In our example of 400 Behavior objects scattered throughout a 10,000-
object virtual universe, only a few of these associated geometric objects would
be visible at a given time. A sizable fraction of the Behavior nodes—those asso-
ciated with nonvisible objects—need not be executed. Only those relatively few
Behavior objects that are associated with visible objects must be executed.
Java 3D mitigates the problem of a large number of Behavior nodes in a high-
population virtual universe through execution culling—choosing only to invoke
those behaviors that have high relevance.
Java 3D requires each behavior to have a scheduling region and to post a wakeup
condition. Together a behavior’s scheduling region and wakeup condition pro-
vide Java 3D’s behavior scheduler with sufficient domain knowledge to selec-
tively prune behavior invocations and only invoke those behaviors that absolutely
need to be executed.
Java 3D’s behavior scheduler executes those Behavior objects that have been
scheduled by calling the behavior’s processStimulus method.
Constructor
The Behavior leaf node class defines the following constructor.
public Behavior()
Methods
The Behavior leaf node class defines the following methods.
This method, invoked by Java 3D’s behavior scheduler, is used to initialize the behavior’s state variables and to establish its WakeupConditions. Classes that extend Behavior must provide their own initialize method. Applications should not call this method.
This method processes stimuli destined for this behavior. The behavior scheduler invokes this method if its WakeupCondition is satisfied. Classes that extend Behavior must provide their own processStimulus method.
These two methods access or modify the Behavior node’s scheduling bounds.
This bounds is used as the scheduling region when the scheduling bounding leaf
is set to null. A behavior is scheduled for activation when its scheduling region
intersects the ViewPlatform’s activation volume (if its wakeup criteria have been
satisfied). The getSchedulingBounds method returns a copy of the associated
bounds.
These two methods access or modify the Behavior node’s scheduling bounding
leaf. When set to a value other than null, this bounding leaf overrides the sched-
uling bounds object and is used as the scheduling region.
This method defines this behavior’s wakeup criteria. This method may only be
called from a Behavior object’s initialize or processStimulus methods to
(re)arm the next wakeup. It should be the last thing done by those methods.
This method, when invoked by a behavior, informs the Java 3D scheduler of the
identified event. The scheduler will schedule other Behavior objects that have
registered interest in this posting.
This method returns the primary view associated with this behavior. This method
is useful with certain types of behaviors, such as Billboard and LOD, that rely on
per-View information and with behaviors in general in regards to scheduling (the
distance from the view platform determines the active behaviors). The “primary”
view is defined to be the first View attached to a live ViewPlatform, if there is
more than one active View. So, for instance, Billboard behaviors would be ori-
ented toward this primary view, in the case of multiple active views into the same
scene graph.
Methods
The Java 3D API provides two methods for constructing WakeupCondition enu-
merations.
These two methods create enumerators that sequentially access this WakeupCon-
dition’s wakeup criteria. The first method creates an enumerator that sequentially
presents all wakeup criteria that were used to construct this WakeupCondition.
The second method creates an enumerator that sequentially presents only those
wakeup criteria that have been satisfied.
Methods
10.5.3.1 WakeupOnAWTEvent
This WakeupCriterion object specifies that Java 3D should awaken a behavior
when the specified AWT event occurs.
Constructors
Methods
This method returns the array of consecutive AWT events that triggered this
WakeupCriterion to awaken the Behavior object. The Behavior object can
retrieve the AWTEvent array and process it in any way it wishes.
10.5.3.2 WakeupOnActivation
The WakeupOnActivation object specifies a wakeup the first time the
ViewPlatform’s activation region intersects with this object’s scheduling region.
This gives the behavior an explicit means of executing code when it is activated.
Constructors
public WakeupOnActivation()
10.5.3.3 WakeupOnBehaviorPost
This WakeupCriterion object specifies that Java 3D should awaken this behavior
when the specified behavior posts the specified ID.
Constructors
awaken on any post from the specified behavior. Specifying a null behavior
implies that this behavior should awaken whenever any behavior posts the speci-
fied postId.
Methods
This method returns the postId that caused the behavior to wake up. If the postId used to construct this wakeup criterion was not zero, the triggering postId will always be equal to the postId used in the constructor.
This method returns the behavior that triggered this wakeup. If the arming behav-
ior used to construct this object was not null, the triggering behavior will be the
same as the arming behavior.
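The matching rule implied by the two paragraphs above can be sketched as a small predicate (a hypothetical helper, not part of the Java 3D API): a null arming behavior matches a post from any behavior, and an arming postId of 0 matches any posted id.

```java
public class PostMatch {
    /** Returns true if a criterion armed with (armedPoster, armedId) should
     *  trigger on a post of postId from poster. A null armedPoster is a
     *  wildcard for the poster; an armedId of 0 is a wildcard for the id. */
    static boolean triggers(Object armedPoster, int armedId,
                            Object poster, int postId) {
        boolean posterOk = (armedPoster == null) || (armedPoster == poster);
        boolean idOk = (armedId == 0) || (armedId == postId);
        return posterOk && idOk;
    }
}
```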
10.5.3.4 WakeupOnDeactivation
The WakeupOnDeactivation object specifies a wakeup on the first detection of a
ViewPlatform’s activation region no longer intersecting with this object’s sched-
uling region. This gives the behavior an explicit means of executing code when it
is deactivated.
Constructors
public WakeupOnDeactivation()
10.5.3.5 WakeupOnElapsedFrames
This WakeupCriterion object specifies that Java 3D should awaken this behavior
after it has rendered the specified number of frames. A value of 0 implies that
Java 3D will awaken this behavior at the next frame. The wakeup criterion can
either be passive or non-passive. If a behavior uses a non-passive WakeupOnE-
lapsedFrames, the rendering system will run continuously.
Constructors
Methods
This method returns the frame count that was specified when constructing this
object.
This method retrieves the state of the passive flag that was used when construct-
ing this object.
10.5.3.6 WakeupOnElapsedTime
This WakeupCriterion object specifies that Java 3D should awaken this behavior
after an elapsed number of milliseconds.
Constructors
Note: The Java 3D scheduler will schedule the object after the specified number
of milliseconds have elapsed, not before. However, the elapsed time may actually
be slightly greater than the time specified.
Methods
10.5.3.7 WakeupOnSensorEntry
This WakeupCriterion object specifies that Java 3D should awaken this behavior
when any sensor enters the specified region.
Note: There can be situations in which a sensor may enter and then exit an
armed region so rapidly that neither the Entry nor Exit condition is engaged.
Constructors
Methods
This method returns the Bounds object used in creating this WakeupCriterion.
10.5.3.8 WakeupOnSensorExit
This WakeupCriterion object specifies that Java 3D should awaken this behavior
when any sensor, already marked as within the region, is no longer in that region.
Note: This semantic guarantees that an Exit condition is engaged if its corre-
sponding Entry condition was engaged.
Constructors
Methods
This method returns the Bounds object used in creating this WakeupCriterion.
This method retrieves the Sensor object that caused the wakeup.
10.5.3.9 WakeupOnCollisionEntry
This WakeupCriterion object specifies that Java 3D should awaken the Wake-
upOnCollisionEntry behavior when the specified object collides with any other
object in the scene graph.
Constants
Constructors
These constructors create a WakeupOnCollisionEntry object that informs Java 3D to wake up this behavior when the specified object collides with any other object in the scene graph. The speedHint flag is either USE_GEOMETRY or USE_BOUNDS.
Methods
These methods return the “collideable” path or bounds object used in specifying
the collision detection.
These methods return the path or bounds object that caused the collision.
10.5.3.10 WakeupOnCollisionExit
This WakeupCriterion object specifies that Java 3D should awaken the Wake-
upOnCollisionExit behavior when the specified object no longer collides with
any other object in the scene graph.
Constants
Constructors
Methods
These methods return the “collideable” path or bounds object used in specifying
the collision detection.
These methods return the path or bounds object that caused the collision.
10.5.3.11 WakeupOnCollisionMovement
This WakeupCriterion object specifies that Java 3D should awaken the Wake-
upOnCollisionMovement behavior when the specified object moves while in a
state of collision with any other object in the scene graph.
Constants
Constructors
Methods
These methods return the “collideable” path or bounds object used in specifying
the collision detection.
These methods return the path or bounds object that caused the collision.
10.5.3.12 WakeupOnViewPlatformEntry
This WakeupCriterion object specifies that Java 3D should awaken the Wake-
upOnViewPlatformEntry behavior when any ViewPlatform enters the specified
region.
Note: There can be situations in which a ViewPlatform may enter and then exit
an armed region so rapidly that neither the Entry nor Exit condition is engaged.
Constructors
Methods
This method returns the Bounds object used in creating this WakeupCriterion.
10.5.3.13 WakeupOnViewPlatformExit
This WakeupCriterion object specifies that Java 3D should awaken the Wake-
upOnViewPlatformExit behavior when any ViewPlatform, already marked as
within the region, is no longer in that region.
Note: This semantic guarantees that an Exit condition gets engaged if its corre-
sponding Entry condition was engaged.
Constructors
Methods
This method returns the Bounds object used in creating this WakeupCriterion.
10.5.3.14 WakeupOnTransformChange
The WakeupOnTransformChange object specifies a wakeup when the transform
within a specified TransformGroup changes.
Constructors
Methods
This method returns the TransformGroup node used in creating this WakeupCri-
terion.
10.5.3.15 WakeupAnd
The WakeupAnd class specifies any number of wakeup conditions ANDed
together. This WakeupCondition object specifies that Java 3D should awaken this
Behavior when all of the WakeupCondition’s constituent wakeup criteria become
valid.
Constructors
This constructor creates a WakeupAnd object that informs the Java 3D sched-
uler to wake up this Behavior object when all the conditions specified in the
array of WakeupCriterion objects have become valid.
10.5.3.16 WakeupOr
The WakeupOr class specifies any number of wakeup conditions ORed together.
This WakeupCondition object specifies that Java 3D should awaken this Behav-
ior when any of the WakeupCondition’s constituent wakeup criteria becomes
valid.
Constructors
This constructor creates a WakeupOr object that informs the Java 3D scheduler
to wake up this Behavior object when any condition specified in the array of
WakeupCriterion objects becomes valid.
10.5.3.17 WakeupAndOfOrs
The WakeupAndOfOrs class specifies any number of OR wakeup conditions
ANDed together. This WakeupCondition object specifies that Java 3D should
awaken this Behavior when all of the WakeupCondition’s constituent WakeupOr
conditions become valid.
Constructors
10.5.3.18 WakeupOrOfAnds
The WakeupOrOfAnds class specifies any number of AND wakeup conditions
ORed together. This WakeupCondition object specifies that Java 3D should
awaken this Behavior when any of the WakeupCondition’s constituent Wakeu-
pAnd conditions becomes valid.
Constructors
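The four combining classes above reduce to simple boolean semantics. The following sketch expresses them over plain booleans (stand-ins for satisfied or unsatisfied wakeup criteria; not the real Java 3D classes).

```java
import java.util.List;

public class WakeupLogic {
    // WakeupAnd: all constituent criteria must be satisfied.
    static boolean and(List<Boolean> criteria) {
        return criteria.stream().allMatch(c -> c);
    }

    // WakeupOr: any constituent criterion satisfied suffices.
    static boolean or(List<Boolean> criteria) {
        return criteria.stream().anyMatch(c -> c);
    }

    // WakeupAndOfOrs: every inner OR group must have a satisfied criterion.
    static boolean andOfOrs(List<List<Boolean>> groups) {
        return groups.stream().allMatch(WakeupLogic::or);
    }

    // WakeupOrOfAnds: at least one inner AND group must be fully satisfied.
    static boolean orOfAnds(List<List<Boolean>> groups) {
        return groups.stream().anyMatch(WakeupLogic::and);
    }
}
```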
[Figure 10-1 (not reproduced): the alpha waveform, showing the trigger time, the phase delay, the alpha-increasing phase (rising to 1), the alpha-at-1 phase, the alpha-decreasing phase (falling to 0), and the alpha-at-0 phase]
Developers can use the loop count in conjunction with the mode flags to generate various kinds of actions. By specifying a loop count of 1 and enabling the mode flag for only the alpha-increasing and alpha-at-1 portion of the waveform, we get the waveform shown in Figure 10-2.
Figure 10-2 An Interpolator Set to a Loop Count of 1 with Mode Flags Set to Enable Only
the Alpha-Increasing and Alpha-at-1 Portion of the Waveform
In Figure 10-2, the alpha value is 0 before the combination of trigger time plus the phase delay duration. The alpha value changes from 0 to 1 over a specified interval of time, and thereafter the alpha value remains 1 (subject to the reprogramming of the interpolator’s parameters).
Figure 10-3 An Interpolator Set to a Loop Count of 1 with Mode Flags Set to Enable Only
the Alpha-Decreasing and Alpha-at-0 Portion of the Waveform
Figure 10-4 An Interpolator Set to a Loop Count of 1 with Mode Flags Set to Enable All Por-
tions of the Waveform
In Figure 10-4, the alpha value is 0 before the combination of trigger time plus the phase delay duration. The alpha value changes from 0 to 1 over a specified period of time, remains at 1 for another specified period of time, then changes from 1 to 0 over a third specified period of time, and thereafter the alpha value remains 0 (subject to the reprogramming of the interpolator’s parameters).
Figure 10-5 An Interpolator Set to Loop Infinitely and Mode Flags Set to Enable Only the
Alpha-Increasing and Alpha-at-1 Portion of the Waveform
In Figure 10-5, alpha goes from 0 to 1 over a fixed duration of time, stays at 1 for
another fixed duration of time, and then repeats.
Similarly, Figure 10-6 shows a looping interpolator with mode flags set to enable
only the alpha-decreasing and alpha-at-0 portion of the waveform.
Figure 10-6 An Interpolator Set to Loop Infinitely and Mode Flags Set to Enable Only the
Alpha-Decreasing and Alpha-at-0 Portion of the Waveform
Finally, Figure 10-7 shows a looping interpolator with both the increasing and
decreasing portions of the waveform enabled.
In all three cases shown by Figure 10-5, Figure 10-6, and Figure 10-7, we can
compute the exact value of alpha at any point in time.
Figure 10-7 An Interpolator Set to Loop Infinitely and Mode Flags Set to Enable All Por-
tions of the Waveform
[Figure (not reproduced): the alpha acceleration, alpha velocity, and alpha value waveforms]
Constants
These flags specify that this alpha’s mode is to use the increasing or decreasing
component of the alpha, respectively.
Constructors
public Alpha()
Constructs a new Alpha object using default values for the parameters that define the alpha phases for the object.
Methods
These methods return the alpha value (between 0.0 and 1.0 inclusive) based on
the time-to-alpha parameters established for this interpolator. The first method
returns the alpha for the current time. The second method returns the alpha for an
arbitrary given time. If the alpha mapping has not started, the starting alpha value
is returned. If the alpha mapping has completed, the ending alpha value is
returned.
These methods set and retrieve this alpha’s start time, the base for all relative
time specifications. The default value of startTime is the system start time,
defined to be a global time base representing the start of Java 3D execution.
These methods set and retrieve this alpha’s mode, which defines which of the
alpha regions are active. The mode is one of the following values: INCREASING_
ENABLE, DECREASING_ENABLE, or both (when both of these modes are ORed
together).
If the mode is INCREASING_ENABLE, the increasingAlphaDuration, increas-
ingAlphaRampDuration, and alphaAtOneDuration are active. If the mode is
DECREASING_ENABLE, the decreasingAlphaDuration, decreasingAlphaRamp-
Duration, and alphaAtZeroDuration are active. If the mode is both constants
ORed, all regions are active. Active regions are all preceded by the phase delay
region.
These methods set and retrieve this alpha’s phase delay duration.
This method returns true if this Alpha object is past its activity window, that is,
if it has finished all its looping activity. This method returns false if this Alpha
object is still active.
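As a concrete illustration of the time-to-alpha mapping described above, the following sketch implements only the simplest case: a loop count of 1, INCREASING_ENABLE mode, and no acceleration ramp. The real Alpha class also supports ramp durations, looping, and the decreasing phases; this is not its implementation.

```java
public class AlphaSketch {
    /** Simplified alpha waveform: alpha is 0 until triggerTime + phaseDelay,
     *  rises linearly to 1 over increasingDuration, and stays at 1 afterward.
     *  All times are in milliseconds. */
    static float value(long t, long triggerTime, long phaseDelay,
                       long increasingDuration) {
        long start = triggerTime + phaseDelay;   // alpha is 0 before this point
        if (t <= start) return 0.0f;
        if (t >= start + increasingDuration) return 1.0f;  // remains at 1
        return (t - start) / (float) increasingDuration;   // linear ramp
    }
}
```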
Constants
This is the default WakeupCondition for all interpolators. The wakeupOn method
of Behavior, which takes a WakeupCondition as the method parameter, will need
to be called at the end of the processStimulus method of any class that sub-
classes Interpolator. This is done with the following method call:
wakeupOn(defaultWakeupCriterion);
Constructors
The Interpolator behavior class has the following constructors.
public Interpolator()
Constructs and initializes a new Interpolator. This constructor provides the common initialization code for all specializations of Interpolator.
Methods
These methods set and retrieve this interpolator’s Alpha object. Setting it to null
causes the Interpolator to stop running.
These methods set and retrieve this Interpolator’s enabled state—the default is
enabled.
Constructors
The PositionInterpolator object specifies the following constructors.
Constructs and initializes a new PositionInterpolator that varies the target Trans-
formGroup node’s translational component (startPosition and endPosition).
The axisOfTranslation parameter specifies the transform that defines the local
coordinate system in which this interpolator operates. The translation is done
along the X-axis of this local coordinate system.
Methods
The PositionInterpolator object specifies the following methods.
These two methods set and get the Interpolator’s start position.
These two methods set and get the Interpolator’s end position.
These two methods set and get the Interpolator’s target TransformGroup node.
These two methods set and get the Interpolator’s axis of translation.
This method is invoked by the behavior scheduler every frame. It maps the alpha
value that corresponds to the current time into a translation value, computes a
transform based on this value, and updates the specified TransformGroup node
with this new transform.
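The alpha-to-translation mapping this method performs can be illustrated with a simplified sketch (ignoring the axisOfTranslation transform, which the real interpolator also applies; the helper names are hypothetical):

```java
public class PositionMapSketch {
    /** Linear mapping of alpha in [0,1] onto [startPosition, endPosition]. */
    static float translation(float alpha, float startPosition, float endPosition) {
        return startPosition + (endPosition - startPosition) * alpha;
    }

    /** Row-major 4x4 transform translating by x along the X-axis. */
    static float[][] xTranslation(float x) {
        float[][] m = new float[4][4];
        for (int i = 0; i < 4; i++) m[i][i] = 1.0f;  // identity rotation/scale
        m[0][3] = x;                                 // X translation component
        return m;
    }
}
```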
The interpolated angle is used to generate a rotation transform about the local Y-axis of this interpolator.
Constructors
Methods
These two methods set and get the interpolator’s minimum rotation angle, in
radians.
These two methods set and get the interpolator’s maximum rotation angle, in
radians.
These two methods set and get the interpolator’s axis of rotation.
These two methods set and get the interpolator’s target TransformGroup node.
This method is invoked by the behavior scheduler every frame. It maps the alpha
value that corresponds to the current time into a rotation angle, computes a trans-
form based on this angle, and updates the specified TransformGroup node with
this new transform.
Constructors
Constructs a new ColorInterpolator object that varies the diffuse color of the tar-
get material between two color values (startColor and endColor).
Methods
These two methods set and get the interpolator’s start color.
These two methods set and get the interpolator’s end color.
These two methods set and get the interpolator’s target Material component
object.
This method is invoked by the behavior scheduler every frame. It maps the alpha
value that corresponds to the current time into a color value and updates the dif-
fuse color of the target Material object with this new color value.
Constructors
Constructs a trivial scale interpolator that varies its target TransformGroup node
between the two scale values, using the specified alpha, an identity matrix, a
minimum scale of 0.1, and a maximum scale of 1.0.
Methods
These two methods set and get the interpolator’s minimum scale.
These two methods set and get the interpolator’s maximum scale.
These two methods set and get the interpolator’s axis of scale.
These two methods set and get the interpolator’s target TransformGroup node.
This method is invoked by the behavior scheduler every frame. It maps the alpha
value that corresponds to the current time into a scale value, computes a trans-
form based on this value, and updates the specified TransformGroup node with
this new transform.
Constructors
Methods
These two methods set and get the interpolator’s first child index.
These two methods set and get the interpolator’s last child index.
These two methods set and get the interpolator’s target Switch node.
This method is invoked by the behavior scheduler every frame. It maps the alpha
value that corresponds to the current time into a child index value and updates
the specified Switch node with this new child index value.
10.6.10 TransparencyInterpolator Object
The TransparencyInterpolator class extends Interpolator. It modifies the transpar-
ency of its target TransparencyAttributes object by linearly interpolating between
a pair of specified transparency values (using the value generated by the specified
Alpha object).
Constructors
Methods
These two methods set and get the interpolator’s minimum transparency.
These two methods set and get the interpolator’s maximum transparency.
These two methods set and get the interpolator’s target TransparencyAttributes
component object.
This method is invoked by the behavior scheduler every frame. It maps the alpha
value that corresponds to the current time into a transparency value and updates
the specified TransparencyAttributes object with this new transparency value.
10.6.11 PathInterpolator Object
The PathInterpolator class extends Interpolator. This class defines the base class
for all path interpolators. Subclasses have access to the computePathInterpola-
tion method, which computes the currentInterpolationValue given the cur-
rent time and alpha. The method also computes the currentKnotIndex, which is
based on the currentInterpolationValue.
The currentInterpolationValue is calculated by linearly interpolating among a series of predefined knots (using the value generated by the specified Alpha object). The first knot must have a value of 0.0. The last knot must have a value of 1.0. An intermediate knot with index k must have a value strictly greater than any knot with index less than k.
Constants
protected float currentInterpolationValue
This value is the ratio between knot values indicated by the currentKnotIndex
variable. So if a subclass wanted to interpolate between knot values, it would use
the currentKnotIndex to get the bounding knots for the “real” value, then use
the currentInterpolationValue to interpolate between the knots. To calculate
this variable, a subclass needs to call the computePathInterpolation method
from the subclass’s processStimulus method. Then this variable will hold a
valid value that can be used in further calculations by the subclass.
protected int currentKnotIndex
This value is the index of the current base knot value, as determined by the alpha
function. A subclass wishing to interpolate between bounding knots would use
this index and the one following it, and would use the currentInterpolation-
Value variable as the ratio between these indices. To calculate this variable, a
subclass needs to call the computePathInterpolation method from the sub-
class’s processStimulus method. Then this variable will hold a valid value that
can be used in further calculations by the subclass.
Constructors
Methods
This method retrieves the length of the knot and position arrays (which are the
same length).
These methods set and retrieve the knot at the specified index for this interpola-
tor.
These methods set and retrieve an array of knot values. The set method replaces
the existing array with the specified array. The get method copies the array of
knots from this interpolator into the specified array. The array must be large
enough to hold all of the knots.
This method computes the base knot index and interpolation value given the cur-
rent value of alpha and the knots[] array. If the index is 0 and there should be no
interpolation, both the index variable and the interpolation variable are set to 0.
Otherwise, currentKnotIndex is set to the lower index of the two bounding
knot points and the currentInterpolationValue variable is set to the ratio of
the alpha value between these two bounding knot points.
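A standalone sketch of this computation (using static fields in place of the real class’s protected instance variables; not the Java 3D implementation itself):

```java
public class PathInterpolationSketch {
    static int currentKnotIndex;
    static float currentInterpolationValue;

    /** knots[] must satisfy: knots[0] == 0.0, last knot == 1.0, and values
     *  strictly increasing. Sets the lower bounding knot index and the ratio
     *  of alpha between the two bounding knots. */
    static void computePathInterpolation(float[] knots, float alpha) {
        if (alpha <= knots[0]) {           // at or before the first knot
            currentKnotIndex = 0;
            currentInterpolationValue = 0f;
            return;
        }
        for (int i = 0; i < knots.length - 1; i++) {
            if (alpha <= knots[i + 1]) {   // found the bounding pair
                currentKnotIndex = i;      // lower of the two bounding knots
                currentInterpolationValue =
                    (alpha - knots[i]) / (knots[i + 1] - knots[i]);
                return;
            }
        }
        currentKnotIndex = knots.length - 2;  // clamp at the last segment
        currentInterpolationValue = 1f;
    }
}
```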
10.6.12 PositionPathInterpolator Object
The PositionPathInterpolator class extends PathInterpolator. It modifies the trans-
lational component of its target TransformGroup by linearly interpolating among
a series of predefined knot/position pairs (using the value generated by the specified Alpha object).
Constructors
Methods
These two methods set and get the interpolator’s indexed position.
This method copies the array of position values from this interpolator into the
specified array. The array must be large enough to hold all of the positions. The
individual array elements must be allocated by the caller.
These two methods set and get the interpolator’s axis of translation.
These two methods set and get the interpolator’s target TransformGroup object.
This method is invoked by the behavior scheduler every frame. It maps the alpha
value that corresponds to the current time into a translation value, computes a
transform based on this value, and updates the specified TransformGroup node
with this new transform.
10.6.13 RotPosPathInterpolator Object
The RotPosPathInterpolator class extends PathInterpolator. It modifies the rota-
tional and translational components of its target TransformGroup by linearly
interpolating among a series of predefined knot/position and knot/orientation
pairs (using the value generated by the specified Alpha object). The interpolated
position and orientation are used to generate a transform in the local coordinate
system of this interpolator.
The first knot must have a value of 0.0. The last knot must have a value of 1.0.
An intermediate knot with index k must have a value strictly greater than any
knot with index less than k.
Constructors
Methods
These two methods set and get the interpolator’s indexed quaternion value.
This method copies the array of quaternion values from this interpolator into the
specified array. The array must be large enough to hold all of the quats. The indi-
vidual array elements must be allocated by the caller.
These two methods set and get the interpolator’s indexed position.
This method copies the array of position values from this interpolator into the
specified array. The array must be large enough to hold all of the positions. The
individual array elements must be allocated by the caller.
These two methods set and get the interpolator’s axis of rotation and translation.
These two methods set and get the interpolator’s target TransformGroup object.
This method replaces the existing arrays of knot values, quaternion values, and
position values with the specified arrays. The arrays of knots, quats, and posi-
tions are copied into this interpolator object.
This method is invoked by the behavior scheduler every frame. It maps the alpha
value that corresponds to the current time into translation and rotation values,
computes a transform based on these values, and updates the specified Trans-
formGroup node with this new transform.
10.6.14 RotPosScalePathInterpolator Object
The RotPosScalePathInterpolator class extends PathInterpolator. It varies the
rotational, translational, and scale components of its target TransformGroup by
linearly interpolating among a series of predefined knot/position, knot/orienta-
tion, and knot/scale pairs (using the value generated by the specified Alpha
object). The interpolated position, orientation, and scale are used to generate a
transform in the local coordinate system of this interpolator.
The first knot must have a value of 0.0. The last knot must have a value of 1.0.
An intermediate knot with index k must have a value strictly greater than any
knot with index less than k.
Constructors
Methods
These two methods set and get the interpolator’s indexed scale value.
This method copies the array of scale values from this interpolator into the spec-
ified array. The array must be large enough to hold all of the scales.
These two methods set and get the interpolator’s indexed quaternion value.
These two methods set and get the interpolator’s indexed position.
This method copies the array of position values from this interpolator into the
specified array. The array must be large enough to hold all of the positions. The
individual array elements must be allocated by the caller.
These two methods set and get the interpolator’s axis of rotation, translation, and
scale.
These two methods set and get the interpolator’s target TransformGroup object.
This method replaces the existing arrays of knot values, quaternion values, posi-
tion values, and scale values with the specified arrays. The arrays of knots, quats,
positions, and scales are copied into this interpolator object.
This method is invoked by the behavior scheduler every frame. It maps the alpha
value that corresponds to the current time into translation, rotation, and scale val-
ues, computes a transform based on these values, and updates the specified
TransformGroup node with this new transform.
10.6.15 RotationPathInterpolator Object
The RotationPathInterpolator class extends the PathInterpolator class. It varies
the rotational component of its target TransformGroup by linearly interpolating
among a series of predefined knot/orientation pairs (using the value generated by the specified Alpha object).
Constructors
Methods
These two methods set and get the interpolator’s indexed quaternion value.
These two methods set and get the interpolator’s axis of rotation.
These two methods set and get the interpolator’s target TransformGroup object.
This method replaces the existing arrays of knot values and quaternion values
with the specified arrays. The arrays of knots and quats are copied into this inter-
polator object.
This method is invoked by the behavior scheduler every frame. It maps the alpha
value that corresponds to the current time into a rotation angle, computes a trans-
form based on this angle, and updates the specified TransformGroup node with
this new transform.
Constructors
public LOD()
Methods
The LOD node class defines the following methods.
The addSwitch method appends the specified Switch node to this LOD’s list of
switches. The setSwitch method replaces the specified Switch node with the
Switch node provided. The insertSwitch method inserts the specified Switch
node at the specified index. The removeSwitch method removes the Switch node
at the specified index. The getSwitch method returns the Switch node specified
by the index. The numSwitches method returns a count of this LOD’s switches.
Constructors
public DistanceLOD()
This constructor creates a DistanceLOD object with a single distance value set to
0.0 and is, therefore, not very useful.
public DistanceLOD(float distances[])
public DistanceLOD(float distances[], Point3f position)
Methods
These methods set and retrieve the position parameter for this DistanceLOD
node. This position is specified in the local coordinates of this node, and is the
position from which the distance to the viewer is computed.
The numDistances method returns a count of the number of LOD distance cutoff
parameters. The getDistance method returns a particular LOD cutoff distance.
The setDistance method sets a particular LOD cutoff distance.
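The selection rule these cutoff distances define can be sketched as a small helper (hypothetical, not part of the API): n cutoff distances select among n + 1 levels of detail, with child 0 used for the nearest range and child n beyond the last cutoff.

```java
public class DistanceLODSketch {
    /** Given sorted cutoff distances, returns which child Switch index a
     *  viewer at viewerDistance should see: child i for distances between
     *  cutoff i-1 and cutoff i, child n beyond the last cutoff. */
    static int whichChild(float[] distances, float viewerDistance) {
        for (int i = 0; i < distances.length; i++)
            if (viewerDistance < distances[i]) return i;
        return distances.length;  // farther than every cutoff
    }
}
```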
Constants
The Billboard class adds the following new constants.
public static final int ROTATE_ABOUT_POINT
Specifies that rotation should be about the specified point and that the children’s Y-axis should match the ViewPlatform’s Y-axis.
Constructors
The Billboard class specifies the following constructors.
public Billboard()
Constructs a Billboard behavior node with default parameters. The default alignment mode is ROTATE_ABOUT_AXIS, with the axis pointing along the Y-axis.
The first constructor constructs a Billboard behavior node with default parame-
ters that operates on the specified target TransformGroup node. The default
alignment mode is ROTATE_ABOUT_AXIS, with the axis along the Y-axis. The next
two constructors construct a Billboard behavior node with the specified axis and
mode that operates on the specified TransformGroup node. The axis parameter
specifies the ray about which the billboard rotates. The point parameter specifies
the position about which the billboard rotates. The mode parameter is the align-
ment mode and is either ROTATE_ABOUT_AXIS or ROTATE_ABOUT_POINT.
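The geometry behind ROTATE_ABOUT_AXIS can be illustrated with a plain-Java sketch (a hypothetical helper, not a Java 3D method): with the rotation axis along Y, the billboard turns so that its +Z direction faces the viewer, which reduces to a single atan2 of the eye's offset in the XZ plane.

```java
// Illustrative geometry for a Y-axis billboard (hypothetical helper).
public class BillboardMath {
    // Rotation angle (radians) about the Y axis that turns the object's +Z
    // toward the eye position; eye and object are in the same space.
    public static double yRotationToFace(double eyeX, double eyeZ,
                                         double objX, double objZ) {
        return Math.atan2(eyeX - objX, eyeZ - objZ);
    }
}
```

An eye directly ahead on +Z yields an angle of 0; an eye directly to the right on +X yields π/2.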
Methods
The Billboard class defines the following methods.
These methods set or retrieve the target TransformGroup node for this Billboard
object.
The first two methods set the rotation point. The third method gets the rotation
point and sets the parameter to this value.
Java 3D provides access to keyboards and mice using the standard Java API
for keyboard and mouse support. Additionally, Java 3D provides access to a vari-
ety of continuous-input devices such as six-degrees-of-freedom (6DOF) trackers
and joysticks.
Continuous-input devices like 6DOF trackers and joysticks have well-defined
continuous inputs. Trackers produce a position and orientation that Java 3D
stores internally as a transformation matrix. Joysticks produce two continuous
values in the range [–1.0, 1.0] that Java 3D stores internally as a transformation
matrix with an identity rotation (no rotation) and one of the joystick values as the
X translation and the other as the Y translation component.
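The joystick mapping above can be written out directly; this is a plain-Java sketch of the stored matrix (row-major 4×4, a hypothetical helper rather than the internal Java 3D representation): identity rotation, with the two joystick values as the X and Y translation components.

```java
// Illustrative mapping of joystick values in [-1.0, 1.0] to a row-major
// 4x4 transformation matrix with an identity rotation (hypothetical helper).
public class JoystickMatrix {
    public static double[] toMatrix(double jx, double jy) {
        return new double[] {
            1, 0, 0, jx,   // X translation from the first joystick value
            0, 1, 0, jy,   // Y translation from the second joystick value
            0, 0, 1, 0,
            0, 0, 0, 1
        };
    }
}
```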
Unfortunately, continuous-input devices do not have the same level of consis-
tency when it comes to their associated switches or buttons. Still, the number of
buttons or switches attached to a particular sensing element remains constant
across all sensing elements associated with a single device.
ble duty. They not only represent actual physical detectors but they also serve as
abstract six-degrees-of-freedom transformations that a Java 3D application can
access. The Sensor class is described in more detail in Section 11.2.3, “The Sen-
sor Object.”
Constants
These three flags control how Java 3D schedules reads. The BLOCKING flag signi-
fies that the driver for a device is a blocking driver and that it should be sched-
uled for regular reads by Java 3D. A blocking driver is a driver that can cause the
thread accessing the driver (the Java 3D implementation thread calling the pol-
lAndProcessInput method) to block while the data is being accessed from the
driver. The NON_BLOCKING flag signifies that the driver for a device is a non-
blocking driver and that it should be scheduled for regular reads by Java 3D. The
DEMAND_DRIVEN flag signifies that the Java 3D implementation should not sched-
ule regular reads on the sensors of this device; the Java 3D implementation will
only call pollAndProcessInput when one of the device’s sensors’ getRead
methods is called. A DEMAND_DRIVEN driver must always provide the current
value of the sensor on demand whenever pollAndProcessInput is called. This
means that DEMAND_DRIVEN drivers are non-blocking by definition.
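The scheduling consequences of the three flags can be summarized in a small plain-Java model (the class and method names are hypothetical, used only to state the rule): BLOCKING and NON_BLOCKING drivers are polled regularly by the scheduler, while a DEMAND_DRIVEN driver is polled only when one of its sensors' getRead methods is called.

```java
// Illustrative model of how the processing-mode flags drive scheduling
// (hypothetical helper, not part of the Java 3D implementation).
public class SchedulingModel {
    public static final int BLOCKING = 0, NON_BLOCKING = 1, DEMAND_DRIVEN = 2;

    // Whether the scheduler performs regular reads on this driver.
    public static boolean polledRegularly(int mode) {
        return mode == BLOCKING || mode == NON_BLOCKING;
    }

    // Whether a sensor getRead call triggers pollAndProcessInput.
    public static boolean polledOnGetRead(int mode) {
        return mode == DEMAND_DRIVEN;
    }
}
```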
Methods
These methods set and retrieve this device’s processing mode, one of BLOCKING,
NON_BLOCKING, or DEMAND_DRIVEN.
This method returns the number of Sensor objects associated with this device.
This method returns the specified Sensor associated with this device.
This method sets the device’s current position and orientation as the device’s
nominal position and orientation (that is, establishes its reference frame relative
to the “tracker base” reference frame). This method is most useful in defining a
nominal pose in immersive head-tracked situations.
This method first polls the device for data values and then processes the values
received from the device. For BLOCKING and NON_BLOCKING drivers, this method
is called regularly and the Java 3D implementation can cache the sensor values.
For DEMAND_DRIVEN drivers, this method is called each time one of the Sen-
sor.getRead methods is called, and is not otherwise called.
This method will not be called by the Java 3D implementation and should be
implemented as an empty method.
This method cleans up the device and relinquishes the associated resources. This
method should be called after the device has been unregistered from Java 3D via
the PhysicalEnvironment.removeInputDevice(InputDevice) method.
Once instantiated, the browser or application must register the device with the
Java 3D input device scheduler. The API for registering devices is specified in
Section 9.7, “The View Object.” The addInputDevice method introduces new
devices to the Java 3D environment and the allInputDevices method produces
an enumeration that allows examination of all available devices within a Java 3D
environment.
11.2 Sensors
The Java 3D API provides only an abstract concept of a device. Rather than
focusing on issues of devices and device models, it instead defines the concept of
a sensor. A sensor consists of a timestamped sequence of input values and the
state of the buttons or switches at the time that Java 3D sampled the value. A
sensor also contains a hotspot offset specified in that sensor’s local coordinate
system. If not specified, the hotspot is (0.0, 0.0, 0.0).
Since a typical hardware environment contains multiple sensing elements,
Java 3D maintains an array of sensors. Users can access a sensor directly from
their Java code or they can assign a sensor to one of Java 3D’s predefined 6DOF
entities such as UserHead.
Constants
The Sensor object specifies the following constants.
These flags define the Sensor’s predictor type. The first flag specifies no
prediction. The second flag specifies that the generated value should corre-
spond to the next frame time.
These flags define the Sensor’s predictor policy. The first flag specifies that
no prediction policy is used. The second flag specifies that the sensor is
assumed to be predicting head position or orientation. The third flag specifies
that the sensor is assumed to be predicting hand position or orientation.
Constructors
The Sensor object specifies the following constructors.
Constructs a Sensor object for the specified input device using default parame-
ters:
These methods construct a new Sensor object associated with the specified
device and consisting of either a default number of SensorReads or sensorRead-
Count number of SensorReads and a hot spot at (0.0, 0.0, 0.0) specified in the
sensor’s local coordinate system. The default for sensorButtonCount is zero.
These methods construct a new Sensor object associated with the specified
device and consisting of either sensorReadCount number of SensorReads or a
default number of SensorReads and an offset defining the sensor’s hot spot in the
sensor’s local coordinate system. The default for sensorButtonCount is zero.
Methods
These methods set and retrieve the number of SensorRead objects associated
with this sensor and the number of buttons associated with this sensor. Both the
number of SensorRead objects and the number of buttons are determined at Sen-
sor construction time.
These methods set and retrieve the sensor’s hotspot offset. The hotspot is speci-
fied in the sensor’s local coordinate system.
These methods extract the most recent sensor reading and the kth most recent
sensor reading from the Sensor object. In both cases, the methods copy the sen-
sor value into the specified argument.
The first method computes the sensor reading consistent with the prediction pol-
icy and copies that value into the read matrix. The second method computes the
sensor reading, consistent with the prediction policy, as of time deltaT in the
future and copies that value into the read matrix. All times are in milliseconds.
These methods return the time associated with the most recent sensor reading
and with the kth most recent sensor reading, respectively.
public int lastButtons(int values[])
public void lastButtons(int k, int values[])
The first method places the most recent sensor reading value for each button into
the array parameter. The second method places the kth most recent sensor read-
ing value for each button into the array parameter, where 0 is the most-recent
sensor reading, 1 is the next most recent sensor reading, and so on. These meth-
ods will throw an ArrayIndexOutOfBoundsException if values.length is less
than the number of buttons.
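The kth-most-recent semantics can be sketched in plain Java; `ReadHistory` below is a hypothetical model of the Sensor's internal buffer of reads, not the actual implementation, but it follows the indexing and exception rules stated above.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative model of a sensor's button-read history (hypothetical class).
public class ReadHistory {
    private final Deque<int[]> reads = new ArrayDeque<>();
    private final int buttonCount;

    public ReadHistory(int buttonCount) { this.buttonCount = buttonCount; }

    public void push(int[] buttons) { reads.addFirst(buttons.clone()); }

    // k = 0 is the most recent read, k = 1 the next most recent, and so on.
    public void lastButtons(int k, int[] values) {
        if (values.length < buttonCount)
            throw new ArrayIndexOutOfBoundsException(
                "values.length < number of buttons");
        int i = 0;
        for (int[] r : reads) {
            if (i++ == k) {
                System.arraycopy(r, 0, values, 0, buttonCount);
                return;
            }
        }
    }
}
```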
These methods set and retrieve the sensor’s predictor policy. The predictor policy
is either PREDICT_NONE or PREDICT_NEXT_FRAME_TIME.
These methods set and retrieve the sensor’s predictor type. The predictor type is
one of the following: NO_PREDICTOR, HEAD_PREDICTOR, or HAND_PREDICTOR.
This method returns the current number of SensorRead objects per sensor.
The first method sets the next sensor read to the specified values. Once these
values are set via this method, they become the current values returned by
methods such as lastRead, lastTime, and lastButtons. Note that if there are no
buttons associated with this sensor, then values may be an empty array. The
second method sets the next SensorRead object to the specified values, including
the next SensorRead’s associated time, transformation, and button state array.
Constants
Constructors
The SensorRead object specifies the following constructor.
public SensorRead()
Methods
These methods set and retrieve the SensorRead object’s transform. They allow a
device to store a new position and orientation value into the SensorRead object,
and a consumer of that value to access it.
These methods set and retrieve the SensorRead object’s timestamp. They allow a
device to store a new timestamp value into the SensorRead object, and a con-
sumer of that value to access it.
These methods set and retrieve the SensorRead object’s button values. They
allow a device to store an integer that encodes the button values into the Sensor-
Read object, and a consumer of those values to access the state of the buttons.
This method returns the number of buttons associated with this SensorRead
object.
11.3 Picking
Behavior nodes provide the means for building developer-specific picking
semantics. An application developer can define custom picking semantics using
Java 3D’s behavior mechanism (see Chapter 10, “Behaviors and Interpolators”).
The developer might wish to define pick semantics that use a mouse to shoot a
ray into the virtual universe from the current viewpoint, find the first object along
that ray, and highlight that object when the end user releases the mouse button. A
typical scenario follows:
1. The application constructs a Behavior node that arms itself to awaken
when AWT detects a left-mouse-button-down event.
2. Upon awakening from a left-mouse-button-down event, the behavior
a. Updates a Switch node to draw a ray that emanates from the center of
the screen.
b. Changes that ray’s TransformGroup node so that the ray points in the
direction of the current mouse position.
c. Declares its interest in mouse-move or left-mouse-button-up events.
3. Upon awakening from a mouse-move event, the behavior
a. Changes that ray’s TransformGroup node so that the ray points in the
direction of the current mouse position.
b. Declares its interest in mouse-move or left-mouse-button-up events.
4. Upon awakening from a left-mouse-button-up event, the behavior
a. Changes that ray’s TransformGroup node so that the ray points in the
direction of the current mouse position.
b. Intersects the ray with all the objects in the virtual universe to find the
first object that the ray intersects.
c. Changes the appearance component of that object’s shape node to
highlight the selected object.
d. Declares its interest in left-mouse-button-down events.
Java 3D includes helper functions that aid in intersecting various geometric
objects with objects in the virtual universe by
• Intersecting an oriented ray with all the objects in the virtual universe. That
function can return the first object intersected along that ray, all the objects
that intersect that ray, or a list of all the objects along that ray sorted by dis-
tance from the ray’s origin.
• Intersecting a volume with all the objects in the virtual universe. That func-
tion returns a list of all the objects contained in that volume.
• Discovering which vertex within an object is closest to a specified ray.
Note: Picking and scene graph update are not synchronized. In Java 3D version
1.2, the elapsed time between a scene graph update and a pick (that uses the
updated scene graph) is about three frames.
must uniquely identify a specific instance of the terminal node. For nodes that
are not under a SharedGroup, the minimal SceneGraphPath consists of the
Locale and the terminal node itself. For nodes that are under a SharedGroup, the
minimal SceneGraphPath consists of the Locale, the terminal node, and a list of
all Link nodes in the path from the Locale to the terminal node. A SceneGraph-
Path may optionally contain other interior nodes that are in the path. A
SceneGraphPath is verified for correctness and uniqueness when it is sent as an
argument to other methods of Java 3D.
In the array of internal nodes, the node at index 0 is the node closest to the
Locale. The indices increase along the path to the terminal node, with the node at
index length-1 being the node closest to the terminal node. The array of nodes
does not contain either the Locale (which is not a node) or the terminal node.
During picking and intersection tests, the user specifies the subtree of the scene
graph that should be tested. The whole tree for a Locale is searched by providing
the Locale to the picking or intersection tests.
The SceneGraphPath object returned by the picking methods represents all the
components in the subgraph that have the capability ENABLE_PICK_REPORTING
set between the root of the subtree and the picked or intersected object. All Link
nodes are implicitly enabled for pick reporting. Note that ENABLE_PICK_REPORT-
ING and ENABLE_COLLISION_REPORTING are disabled by default. This means that
the picking and collision methods will return the minimal SceneGraphPath by
default.
When a SceneGraphPath is returned from the picking or collision methods of
Java 3D, it will also contain the value of the LocalToVworld transform of the ter-
minal node that was in effect at the time the pick or collision occurred.
Constructors
public SceneGraphPath()
These construct and initialize a new SceneGraphPath object. The first form spec-
ifies the path’s Locale object and the object in question. The second form
includes an array of nodes that fall in between the Locale and the object in ques-
tion, and which nodes have their ENABLE_PICK_REPORTING capability bit set. The
object parameter may be a Group, Shape3D, or Morph node. If any other type of
leaf node is specified, an IllegalArgumentException is thrown.
Methods
These methods set the path’s values. The first method sets the path’s interior val-
ues. The second method sets the path’s Locale to the specified Locale. The third
method sets the path’s object to the specified object (a Group node, or a Shape3D
or Morph leaf node). The fourth method replaces the link node associated with
the specified index with the specified newLink. The last method replaces all of
the link nodes with the new list of link nodes.
The first method returns the path’s Locale. The second method returns the path’s
object.
The first method returns the number of intermediate nodes in this path. The sec-
ond method returns the node associated with the specified index.
The set method sets the transform component of this SceneGraphPath to the
value of the passed transform. The get method returns a copy of the transform
associated with this SceneGraphPath. The method returns null if there is no
transform associated. If this SceneGraphPath was returned by a Java 3D picking
or collision method, the transform contains the LocalToVworld transform of the
terminal node that was in effect at the time the pick or collision occurred.
This method determines whether two SceneGraphPath objects represent the same
path in the scene graph. Either object might include a different subset of internal
nodes; only the internal link nodes, the Locale, and the Node itself are compared.
The paths are not validated for correctness or uniqueness.
public boolean equals(SceneGraphPath testPath)
public boolean equals(Object o1)
The first method returns true if all of the data members of path testPath are
equal to the corresponding data members in this SceneGraphPath. The second
method returns true if the Object o1 is of type SceneGraphPath and all of the
data members of o1 are equal to the corresponding data members in this
SceneGraphPath and if the values of the transforms are equal.
This method returns a hash number based on the data values in this object. Two
different SceneGraphPath objects with identical data values (that is,
equals(SceneGraphPath) returns true) will return the same hash number. Two
paths with different data members may return the same hash value, although this
is not likely.
This method returns a string representation of this object. The string contains the
class names of all nodes in the SceneGraphPath.
“PickShape Object”). The methods that return an array either return all the
picked objects or all the picked objects in sorted order starting with the objects
“closest” to the eyepoint and ending with the objects farthest from the eyepoint.
Methods that return a single SceneGraphPath return a single path object that
specifies either the object closest to the eyepoint or any picked object (this latter
method also implements the fastest pick operation possible). All ties in testing
for closest objects intersected result in an indeterminate order.
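The closest-first ordering can be illustrated with a plain-Java sketch; `PickSorter` and its `Hit` pair are hypothetical stand-ins for the pick results, not Java 3D classes. An array of hits sorted by distance from the ray's origin plays the role of the array-returning sorted pick, and its first element plays the role of the closest-object pick.

```java
import java.util.Arrays;
import java.util.Comparator;

// Illustrative model of sorted pick results (hypothetical classes).
public class PickSorter {
    public static final class Hit {
        public final String node;     // picked node, by name for illustration
        public final double distance; // distance from the ray's origin
        public Hit(String node, double distance) {
            this.node = node;
            this.distance = distance;
        }
    }

    // Hits ordered closest-to-the-eyepoint first.
    public static Hit[] sortClosestFirst(Hit[] hits) {
        Hit[] sorted = hits.clone();
        Arrays.sort(sorted, Comparator.comparingDouble(h -> h.distance));
        return sorted;
    }
}
```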
Constructors
public PickShape()
Constructors
public PickBounds()
public PickBounds(Bounds boundsObject)
The first constructor creates a PickBounds initialized with the bounds set to null.
The second constructor creates a PickBounds with the bounds set to boundsOb-
ject.
Methods
Constructors
public PickPoint()
public PickPoint(Point3d location)
The first constructor creates a PickPoint initialized to (0,0,0). The second con-
structor creates a PickPoint at the specified location.
Methods
Constructors
public PickRay()
public PickRay(Point3d origin, Vector3d direction)
The first constructor creates a PickRay initialized with an origin and direction of
(0,0,0). The second constructor creates a PickRay from the specified origin and
direction.
Methods
These methods set and retrieve the origin and direction of this PickRay
object.
Constructors
public PickSegment()
public PickSegment(Point3d start, Point3d end)
The first constructor creates a PickSegment object with the start and end of the
segment initialized to (0,0,0). The second constructor creates a PickSegment
object from the specified start and end points.
Methods
These methods set and retrieve the start and end points of this PickSegment
object.
Constructors
Constructs an empty PickCone. The origin and direction of the cone are initial-
ized to (0,0,0). The spread angle is initialized to π/64 radians.
Methods
These three methods return the origin, direction, and spread angle of this Pick-
Cone, respectively.
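The containment test implied by a cone pick shape can be sketched in plain Java (an illustrative model only; `ConeTest.inside` is a hypothetical helper, not Java 3D's intersection routine): a point lies inside an infinite cone when the angle between the vector from the apex to the point and the cone's axis does not exceed the spread angle.

```java
// Illustrative point-in-infinite-cone test (hypothetical helper).
public class ConeTest {
    // Apex at the origin, axis along unit vector d, half-angle spreadAngle
    // in radians.
    public static boolean inside(double[] p, double[] d, double spreadAngle) {
        double len = Math.sqrt(p[0]*p[0] + p[1]*p[1] + p[2]*p[2]);
        if (len == 0) return true;   // the apex itself is inside
        double cos = (p[0]*d[0] + p[1]*d[1] + p[2]*d[2]) / len;
        return Math.acos(Math.max(-1, Math.min(1, cos))) <= spreadAngle;
    }
}
```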
The PickConeRay object is an infinite cone pick ray shape. It can be used as an
argument to the picking methods in BranchGroup and Locale.
Constructors
The first constructor creates an empty PickConeRay. The origin and direction of
the cone are initialized to (0,0,0). The spread angle is initialized to π/64 radians.
The second constructor creates an infinite cone pick shape from the specified
parameters.
Methods
This method sets the parameters of this PickCone to the specified values.
The PickConeSegment object is a finite cone segment pick shape. It can be used
as an argument to the picking methods in BranchGroup and Locale.
Constructors
The first constructor creates an empty PickConeSegment. The origin and end
point of the cone are initialized to (0,0,0). The spread angle is initialized to π/64
radians. The second constructor creates a finite cone pick shape from the speci-
fied parameters.
Methods
public void set(Point3d origin, Point3d end, double spreadAngle) New in 1.2
This method sets the parameters of this PickConeSegment to the specified values.
The PickCylinder object is the abstract base class of all cylindrical pick shapes.
Constructors
This constructor creates an empty PickCylinder. The origin of the cylinder is ini-
tialized to (0,0,0). The radius is initialized to 0.
Methods
These three methods return the origin, radius, and direction of this PickCylinder
object.
11.3.12 PickCylinderRay Object
The PickCylinderRay object is an infinite cylindrical ray pick shape. It can be
used as an argument to the picking methods in BranchGroup and Locale.
Constructors
The first constructor creates an empty PickCylinderRay. The origin and direction
of the cylindrical ray are initialized to (0,0,0). The radius is initialized to 0. The
second constructor creates an infinite cylindrical ray pick shape from the speci-
fied parameters.
Methods
public void set(Point3d origin, Vector3d direction, double radius) New in 1.2
This method sets the parameters of this PickCylinderRay to the specified values.
11.3.13 PickCylinderSegment Object
The PickCylinderSegment object is a finite cylindrical segment pick shape. It can
be used as an argument to the picking methods in BranchGroup and Locale.
Constructors
The first constructor creates an empty PickCylinderSegment. The start and end
points of the cylindrical segment are initialized to (0,0,0). The radius is initial-
ized to 0.
Methods
public void set(Point3d start, Point3d end, double radius) New in 1.2
ing, setting particular audio device elements, and querying generic character-
istics for any audio device.
Constants
Specifies that audio playback will be through a single speaker some distance
away from the listener.
Specifies that audio playback will be through stereo speakers some distance
away from, and at some angle to, the listener.
12.1.1 Initialization
Each audio device driver must be initialized. The chosen device driver should be
initialized before any Java 3D Sound methods are executed because the imple-
mentation of the Sound methods, in general, is potentially device-driver depen-
dent.
Methods
Initialize the audio device. Exactly what occurs during initialization is imple-
mentation dependent. This method provides explicit control by the user over
when this initialization occurs.
public abstract boolean close()
Closes the audio device, releasing resources associated with this device.
• A monaural speaker.
• A pair of speakers, equally distant from the listener, both at some angle
from the head coordinate system Z axis. It’s assumed that the speakers are
at the same elevation and oriented symmetrically about the listener.
The type of playback chosen affects the sound image generated. Cross-talk can-
cellation is applied to the audio image if playback over stereo speakers is
selected.
Methods
The following methods affect the playback of sound processed by the Java 3D
sound renderer.
These methods set and retrieve the type of audio playback device (HEADPHONES,
MONO_SPEAKER, or STEREO_SPEAKERS) used to output the analog audio from ren-
dering Java 3D Sound nodes.
These methods set and retrieve the distance in meters from the center ear (the
midpoint between the left and right ears) and one of the speakers in the listener’s
environment. For monaural speaker playback, a typical distance from the listener
to the speaker in a workstation cabinet is 0.76 meters. For stereo speakers placed
at the sides of the display, this might be 0.82 meters.
These methods set and retrieve the angle, in radians, between the vectors from
the center ear to each of the speaker transducers and the vectors from the center
ear parallel to the head coordinate’s Z axis. Speakers placed at the sides of the
computer display typically range between 0.175 and 0.350 radians (between 10
and 20 degrees).
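The quoted angle range follows directly from typical speaker geometry. As an illustrative plain-Java sketch (`angleFromGeometry` is a hypothetical helper, not a Java 3D method), the angle between the center-ear-to-speaker vector and the head Z axis is the arctangent of the speaker's lateral offset over its forward distance.

```java
// Illustrative speaker-angle geometry (hypothetical helper).
public class SpeakerAngles {
    // Angle, in radians, between the center-ear-to-speaker vector and the
    // head coordinate Z axis, from the speaker's lateral offset and its
    // forward distance, both in meters.
    public static double angleFromGeometry(double lateralOffset,
                                           double forwardDistance) {
        return Math.atan2(lateralOffset, forwardDistance);
    }
}
```

A speaker 0.2 m to the side of a display 0.82 m away, for example, sits at about 0.24 radians, within the typical 0.175 to 0.350 radian range.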
Methods
This method retrieves the maximum number of channels available for Java 3D
sound rendering for all sound sources.
During rendering, when Sound nodes are playing, this method returns the num-
ber of channels still available to Java 3D for rendering additional Sound nodes.
This method queries the number of channels (on the executing audio device) that
are used, or would be used, to render a particular Sound node. The return value
is the same whether the Sound is currently active and enabled (being played) or
is inactive.
Methods in this interface provide the Java 3D core with a generic way to set
and query the audio device on which the application has chosen audio rendering
to be performed. Methods in this interface include:
• Setting up and clearing the sound as a sample on the device
• Starting, stopping, pausing, unpausing, muting, and unmuting a sample on
the device
• Setting parameters for each sample corresponding to the fields in the Sound
node
• Setting the current active aural parameters that affect all positional samples
Constants
These constants specify the sound types. Sound types match the Sound node
classes defined for Java 3D core for BackgroundSound, PointSound, and Cone-
Sound. The type of sound a sample is loaded as determines which methods affect
it.
These constants specify the sound data types. Samples can be processed as
streaming or buffered data. Fully spatializing sound sources may require data to
be buffered.
Sound data specified as streaming is not copied by the AudioDevice driver imple-
mentation. It is up to the application to ensure that this data is continuously
accessible during sound rendering. Furthermore, full sound spatialization may
not be possible for all AudioDevice3D implementations on unbuffered sound data.
Sound data specified as buffered is copied by the AudioDevice driver implemen-
tation.
Methods
This method accepts a reference to the current View object. The PhysicalEnvi-
ronment parameters (with playback type and speaker placement) and the Physi-
calBody parameters (position and orientation of ears) can be obtained from this
object, as can the transformations to and from ViewPlatform coordinates (the
space the listener’s head is in) and virtual world coordinates (the space the
sounds are in).
Prepare the sound. This method accepts a reference to the MediaContainer that
contains a reference to sound data and information about the type of data it is.
The soundType parameter defines the type of sound associated with this sample
(Background, Point, or Cone).
Depending on the type of MediaContainer the sound data is in and on the imple-
mentation of the AudioDevice used, sound data preparation could consist of
opening, attaching, or loading sound data into the device. Unless the cached
flag is true, this sound data should not be copied, if possible, into host or
device memory.
Once this preparation is complete for the sound sample, an AudioDevice-specific
index, used to reference the sample in future method calls, is returned. All the
rest of the methods described below require this index as a parameter.
Clear the sound. This method requests that the AudioDevice free all resources
associated with the sample with index id.
Query Sample duration. If it can be determined, this method returns the duration
in milliseconds of the sound sample. For non-cached streams, this method
returns Sound.DURATION_UNKNOWN.
Query the number of channels used by the sound. These methods return the num-
ber of channels (on the executing audio device) that this sound is using, if it
is playing, or the number it is expected to use if it were to begin playing. The
first method takes the sound’s current state (including whether it is muted or
unmuted) into account. The second method uses the muted parameter to make the
determination.
For some AudioDevice3D implementations:
• Muted sounds take up channels on the system’s mixer (because they’re ren-
dered as samples playing with a gain of zero).
Start sample. This method begins a sound playing on the AudioDevice and
returns a flag indicating whether or not the sample was started.
Stop sample. This method stops the sound on the AudioDevice and returns a flag
indicating whether or not the sample was stopped.
Query last start time for this sound on the device. This method returns the system
time of when the sound was last “started.” Note that this start time will be as
accurate as the AudioDevice implementation can make it, but that it is not guar-
anteed to be exact.
Set gain scale factor. This method sets the overall gain scale factor applied to
data associated with this source to increase or decrease its overall amplitude. The
gain scaleFactor value passed into this method is the combined value of the
Sound node’s initial gain and the current AuralAttribute gain scale factors.
Set distance gain. This method sets this sound’s distance gain elliptical attenua-
tion (not including the filter cutoff frequency) by defining corresponding arrays
containing distances from the sound’s origin and gain scale factors applied to all
active positional sounds. The gain scale factor is applied to sound based on the
distance the listener is from the sound source. These attenuation parameters are
ignored for BackgroundSound nodes. The backAttenuationScaleFactor
parameter is ignored for PointSound nodes.
For a full description of the attenuation parameters, see Section 6.9.3, “Cone-
Sound Node.”
Set AuralAttributes distance filter. This method sets the distance filter by
defining corresponding arrays containing distances and frequency cutoffs
applied to all active positional sounds. The frequency cutoff is applied to
sound based on the distance the listener is from the sound source. For a full
description of this parameter and how it is used, see Section 8.1.17, “AuralAt-
tributes Object.”
Set loop count. This method sets the number of times sound is looped during
play. For a complete description of this method, see the description for the
Sound.setLoop method in Section 6.9, “Sound Node.”
These methods mute and unmute a playing sound sample. The first method
makes a sample play silently. The second method makes a silently playing sam-
ple audible. Ideally, muting a sample is implemented by stopping the sample
and freeing its channel resources (rather than just setting the gain of the
sample to zero). Ideally, un-muting a sample restarts it at an offset from the
beginning equal to the number of milliseconds that have elapsed since the
sample began playing.
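The resume-offset rule above can be written out in plain Java. This is an illustrative sketch (`ResumeOffset` is a hypothetical helper, not part of the interface); wrapping the elapsed time by the sample's duration is an added assumption for looping sounds, not something the text above specifies.

```java
// Illustrative computation of the un-mute resume offset (hypothetical helper).
public class ResumeOffset {
    // Offset in milliseconds at which an un-muted sample resumes: the time
    // elapsed since it began playing, wrapped by the sample's duration when
    // a positive duration is known (assumed looping behavior).
    public static long resumeOffsetMillis(long startTime, long now,
                                          long durationMillis) {
        long elapsed = now - startTime;
        return durationMillis > 0 ? elapsed % durationMillis : elapsed;
    }
}
```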
These methods pause and unpause a playing sound sample. The first method
temporarily stops a cached sample from playing without resetting the sample’s
current pointer back to the beginning of the sound data so that it can be un-
paused at a later time from the same location in the sample when the pause was
initiated. The second method restarts the paused sample from the location in the
sample where it was paused.
Set position. This method sets this sound’s location (in local coordinates)
from the provided position.
Set direction. This method sets this sound’s direction from the local coordinate
vector provided. For a full description of the direction parameter, see
Section 6.9.3, “ConeSound Node.”
Set virtual world transform. This method passes a reference to the concatenated
transformation to be applied to local sound position and direction parameters.
Set AuralAttributes gain rolloff. This method sets the speed-of-sound factor. For
a full description of this parameter and how it is used, see Section 8.1.17,
“AuralAttributes Object.”
Set angular attenuation. This method sets this sound’s angular gain attenuation
(including filter) by defining corresponding arrays containing angular offsets
from the sound’s axis, gain scale factors, and frequency cutoff applied to all
active directional sounds. Gain scale factor is applied to sound based on the
angle between the sound’s axis and the ray from the sound source origin to the
listener. The form of the attenuation parameter is fully described in Section 6.9.3,
“ConeSound Node.”
Set AuralAttributes reverberation delay. This method sets the delay time between
each order of reflection (while reverberation is being rendered) explicitly given
in milliseconds. A value for delay time of 0.0 disables reverberation. For a full
description of this parameter and how it is used, see Section 8.1.17, “AuralAt-
tributes Object.”
Set AuralAttributes reverberation order. This method sets the number of times
reflections are added to reverberation being calculated. A value of –1 specifies an
unbounded number of reverberations. For a full description of this parameter and
how it is used, see Section 8.1.17, “AuralAttributes Object.”
Set AuralAttributes frequency scale factor. This method specifies a scale factor
applied to the frequency (or wavelength). This parameter can also be used to
expand or contract the usual frequency shift applied to the sound source due to
Doppler effect calculations. Valid values are ≥ 0.0. A value greater than 1.0 will
increase the playback rate. For a full description of this parameter and how it is
used, see Section 8.1.17, “AuralAttributes Object.”
Set AuralAttributes velocity scale factor. This method specifies a velocity scale
factor applied to the velocity of sound relative to listener’s position and move-
ment in relation to the sound’s position and movement. This scale factor is mul-
tiplied by the calculated velocity portion of Doppler effect equation used during
sound rendering. For a full description of this parameter and how it is used, see
Section 8.1.17, “AuralAttributes Object.”
Java 3D's execution and rendering model assumes the existence of a VirtualUniverse object and an attached scene graph. This scene graph can be minimal
and not noticeable from an application’s perspective when using immediate-
mode rendering, but it must exist.
Java 3D’s execution model intertwines with its rendering modes and with behav-
iors and their scheduling. This chapter first describes the three rendering modes,
then describes how an application starts up a Java 3D environment, and finally, it
discusses how the various rendering modes work within this framework.
[Figure: minimal scene graph for immediate-mode rendering, consisting of a
VirtualUniverse, a Hi-res Locale, a BranchGroup (BG), a TransformGroup (TG),
and the PhysicalBody and PhysicalEnvironment objects.]
Java 3D provides utility functions that create much of this structure on behalf of
a pure immediate-mode application, making it less noticeable from the applica-
tion’s perspective—but the structure must exist.
All rendering is done completely under user control. It is necessary for the user
to clear the 3D canvas, render all geometry, and swap the buffers. Additionally,
rendering the right and left eye for stereo viewing becomes the sole responsibil-
ity of the application.
In pure immediate mode, the user must stop the Java 3D renderer, via the
Canvas3D object stopRenderer() method, prior to adding the Canvas3D object
to an active View object (that is, one that is attached to a live ViewPlatform
object).
clear canvas
call preRender() // user-supplied method
set view
render opaque scene graph objects
call renderField(FIELD_ALL) // user-supplied method
render transparent scene graph objects
call postRender() // user-supplied method
synchronize and swap buffers
call postSwap() // user-supplied method
In both cases, the entire loop, beginning with clearing the canvas and ending with
swapping the buffers, defines a frame. The application is given the opportunity to
render immediate-mode geometry at any of the clearly identified spots in the ren-
dering loop. A user specifies his or her own rendering methods by extending the
Canvas3D class and overriding the preRender, postRender, postSwap, and/or
renderField methods.
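The callback ordering in the rendering loop above can be sketched with a plain-Java mock. The class below is a hypothetical stand-in, not the real Canvas3D API; it only records the order in which the steps and user-supplied hooks would run for a monoscopic frame.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical mock of the Canvas3D rendering-loop callback ordering;
// not the real Java 3D API.
public class FrameLoopSketch {
    final List<String> trace = new ArrayList<>();

    // Subclasses would override these, mirroring the user-supplied
    // preRender/renderField/postRender/postSwap hooks.
    void preRender()           { trace.add("preRender"); }
    void renderField(String f) { trace.add("renderField:" + f); }
    void postRender()          { trace.add("postRender"); }
    void postSwap()            { trace.add("postSwap"); }

    // One frame, in the order given by the specification: clear, preRender,
    // set view, opaque objects, renderField, transparent objects, postRender,
    // swap, postSwap.
    List<String> renderFrame() {
        trace.clear();
        trace.add("clearCanvas");
        preRender();
        trace.add("setView");
        trace.add("renderOpaque");
        renderField("FIELD_ALL");
        trace.add("renderTransparent");
        postRender();
        trace.add("swapBuffers");
        postSwap();
        return trace;
    }

    public static void main(String[] args) {
        System.out.println(new FrameLoopSketch().renderFrame());
    }
}
```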
Constants
These constants specify the field that the rendering loop for this Canvas3D is
rendering. The FIELD_LEFT and FIELD_RIGHT values indicate the left and right
fields of a field-sequential stereo rendering loop, respectively. The FIELD_ALL
value indicates a monoscopic or single-pass stereo rendering loop.
Methods
This method returns the 2D graphics object associated with this Canvas3D. A
new 2D graphics object is created if one does not already exist. See
“J3DGraphics2D.”
Applications that wish to perform operations in the rendering loop prior to any
actual rendering must override this method. The Java 3D rendering loop invokes
this method after clearing the canvas and before any rendering has been done for
this frame. Applications should not call this method.
Applications that wish to perform operations in the rendering loop following any
actual rendering must override this method. The Java 3D rendering loop invokes
this method after completing all rendering to the canvas for this frame and before
the buffer swap. Applications should not call this method.
Applications that wish to perform operations at the very end of the rendering
loop must override this method. The Java 3D rendering loop invokes this method,
following the buffer swap, after completing all rendering to this canvas, and
to all other canvases associated with the current view, for this frame. In
off-screen mode, all rendering is copied to the off-screen buffer before this
method is called. Applications should not call this method.
Applications that wish to perform operations during the rendering loop must
override this function. The Java 3D rendering loop invokes this method, possibly
twice, during the loop. It is called once for each field (once per frame on a mono-
scopic system or once each for the right eye and left eye on a field-sequential ste-
reo system). This method is called after all opaque objects are rendered and
before any transparent objects are rendered (subject to restrictions imposed by
OrderedGroup nodes). This is intended for use by applications that want to mix
retained/compiled-retained mode rendering with some immediate-mode render-
ing. The fieldDesc parameter is the field description: FIELD_LEFT, FIELD_
RIGHT, or FIELD_ALL. Applications that wish to work correctly in stereo mode
should render the same image for both FIELD_LEFT and FIELD_RIGHT calls. If
Java 3D calls the renderer with FIELD_ALL, the immediate-mode rendering only
needs to be done once. Applications should not call this method.
These methods start or stop the Java 3D renderer for this Canvas3D object. If the
Java 3D renderer is currently running when stopRenderer is called, the render-
ing will be synchronized before being stopped. No further rendering will be done
to this canvas by Java 3D until the renderer is started again. If the Java 3D ren-
derer is not currently running when startRenderer is called, any rendering to
other Canvas3D objects sharing the same View will be synchronized before this
Canvas3D’s renderer is (re)started.
This method synchronizes and swaps buffers on a double-buffered canvas for this
Canvas3D object. This method should only be called if the Java 3D renderer has
been stopped. In the normal case, the renderer automatically swaps the buffer.
This method calls the flush(true) methods of the associated 2D and 3D graph-
ics contexts, if they have been allocated. If the application invokes this method
and the canvas has a running Java 3D renderer, a RestrictedAccessException
exception is thrown. An IllegalStateException is thrown if this Canvas3D is
in off-screen mode.
14.3.1 GraphicsContext3D
The GraphicsContext3D object is used for immediate-mode rendering into a 3D
canvas. It is created by, and associated with, a specific Canvas3D object. A
GraphicsContext3D class defines methods that manipulate 3D graphics state
attributes and draw 3D geometric primitives.
Note that the drawing methods in this class are not necessarily executed immedi-
ately. They may be buffered up for future execution. Applications must call the
flush(boolean) method to ensure that the rendering actually happens. The
flush method is implicitly called in the following cases:
Constants
These constants specify whether rendering is done to the left eye, the right eye,
or to both.
Constructors
There are no publicly accessible constructors of GraphicsContext3D. An applica-
tion obtains a 3D graphics context object from the Canvas3D object into which
the application wishes to render by using the getGraphicsContext3D method.
The Canvas3D object creates a new GraphicsContext3D the first time an applica-
tion invokes getGraphicsContext3D. A new GraphicsContext3D initializes its
state variables to the following defaults:
Parameter: Default Value
Background object: null
Fog object: null
ModelClip object: null
Appearance object: null
List of Light objects: empty
High-Res coordinates: (0, 0, 0)
modelTransform: identity
AuralAttributes object: null
List of Sound objects: empty
buffer override: false
front buffer rendering: false
stereo mode: STEREO_BOTH
Methods
These methods access or modify the current Appearance component object used
by this 3D graphics context. The graphics context stores a reference to the spec-
ified Appearance object. This means that the application may modify individual
appearance attributes by using the appropriate methods on the Appearance object
(see Section 8.1.2, “Appearance Object”). The Appearance component object
must not be part of a live scene graph, nor may it subsequently be made part of a
live scene graph—an IllegalSharingException is thrown in such cases. If the
Appearance object is null, default values will be used for all appearance
attributes—it is as if an Appearance node were created using the default con-
structor.
These methods access or modify the current Background leaf node object used
by this 3D graphics context. The graphics context stores a reference to the spec-
ified Background node. This means that the application may modify the back-
ground color or image by using the appropriate methods on the Background node
object (see Section 6.4, “Background Node”). The Background node must not be
part of a live scene graph, nor may it subsequently be made part of a live scene
graph—an IllegalSharingException is thrown in such cases. If the Back-
ground object is null, the default background color of black (0,0,0) is used to
clear the canvas prior to rendering a new frame. The Background node’s applica-
tion region is ignored for immediate-mode rendering.
These methods access or modify the current Fog leaf node object used by this 3D
graphics context. The graphics context stores a reference to the specified Fog
node. This means that the application may modify the fog attributes using the
appropriate methods on the Fog node object (see Section 6.7, “Fog Node”). The
Fog node must not be part of a live scene graph, nor may it subsequently be
made part of a live scene graph—an IllegalSharingException is thrown in
such cases. If the Fog object is null, fog is disabled. Both the region of influence
and the hierarchical scope of the Fog node are ignored for immediate-mode ren-
dering.
These methods access or modify the list of lights used by this 3D graphics con-
text. The addLight method adds a new light to the end of the list of lights. The
insertLight method inserts a new light before the light at the specified index.
The setLight method replaces the light at the specified index with the light pro-
vided. The removeLight method removes the light at the specified index. The
numLights method returns a count of the number of lights in the list. The
getLight method returns the light at the specified index. The getAllLights
method retrieves the Enumeration object of all lights.
The graphics context stores a reference to each light object in the list of lights.
This means that the application may modify the light attributes for any of the
lights using the appropriate methods on that Light node object (see Section 6.8,
“Light Node”). None of the Light nodes in the list of lights may be part of a live
scene graph, nor may they subsequently be made part of a live scene graph—an
IllegalSharingException is thrown in such cases. Adding a null Light object
to the list will result in a NullPointerException. Both the region of influence
and the hierarchical scope of all lights in the list are ignored for immediate-mode
rendering.
These methods access or modify the current model transform. The multiply-
ModelTransform method multiplies the current model transform by the specified
transform and stores the result back into the current model transform. The speci-
fied transformation must be affine. A BadTransformException is thrown (see
These methods set and retrieve a flag that specifies whether the double buffering
and stereo mode from the Canvas3D are overridden. When set to true, this
attribute enables the frontBufferRendering and stereoMode attributes.
These methods set and retrieve a flag that enables or disables immediate mode
rendering into the front buffer of a double buffered Canvas3D. This attribute is
only used when the bufferOverride flag is enabled. Note that this attribute has
no effect if double buffering is disabled or is not available on the Canvas3D.
These methods set and retrieve the current stereo mode for immediate mode ren-
dering. The parameter specifies which stereo buffer or buffers is rendered into.
This attribute is only used when the bufferOverride flag is enabled. The stereo
mode is one of the following: STEREO_LEFT, STEREO_RIGHT, or STEREO_BOTH.
Note that this attribute has no effect if stereo is disabled or is not available on the
Canvas3D.
These methods set and retrieve the current ModelClip leaf node. The set method
sets the ModelClip to the specified object. The graphics context stores a refer-
ence to the specified ModelClip node. This means that the application may mod-
ify the model clipping attributes using the appropriate methods on the ModelClip
node object. The ModelClip node must not be part of a live scene graph, nor may
it subsequently be made part of a live scene graph; an IllegalSharingException
is thrown in such cases. If the ModelClip object is null, model clipping is disabled.
Both the region of influence and the hierarchical scope of the ModelClip node
are ignored for immediate-mode rendering.
These methods access or modify the list of sounds used by this 3D graphics con-
text. The addSound method appends the specified sound to this graphics con-
text’s list of sounds. The insertSound method inserts the specified sound at the
specified index location. The setSound method replaces the specified sound with
the sound provided. The removeSound method removes the sound at the speci-
fied index location. The numSounds method retrieves the current number of
sounds in this graphics context. The getSound method retrieves the index-
selected sound. The isSoundPlaying method retrieves the sound-playing flag.
The getAllSounds method retrieves the Enumeration object of all the sounds.
The graphics context stores a reference to each sound object in the list of sounds.
This means that the application may modify the sound attributes for any of the
sounds by using the appropriate methods on that Sound node object (see
Section 6.9, “Sound Node”). None of the Sound nodes in the list of sounds may
be part of a live scene graph, nor may they subsequently be made part of a live
scene graph—an IllegalSharingException is thrown in such cases. Adding a
null Sound object to the list results in a NullPointerException. If the list of
sounds is empty, sound rendering is disabled.
Adding or inserting a sound to the list of sounds implicitly starts the sound play-
ing. Once a sound is finished playing, it can be restarted by setting the sound’s
enable flag to true. The scheduling region of all sounds in the list is ignored for
immediate-mode rendering.
This method reads an image from the frame buffer and copies it into the Image-
Component or DepthComponent objects referenced by the specified Raster
object. All parameters of the Raster object and the component ImageComponent
or DepthComponent objects must be set to the desired values prior to calling this
method. These values determine the location, size, and format of the pixel data
that is read. This method calls flush(true) prior to reading the frame buffer.
This method clears the canvas to the color or image specified by the current
Background leaf node object.
The first draw method draws the specified Geometry component object using the
current state in the graphics context. The second draw method draws the speci-
fied Shape3D leaf node object. This is a convenience method that is identical to
calling the setAppearance(Appearance) and draw(Geometry) methods passing
the Appearance and Geometry component objects of the specified Shape3D
nodes as arguments.
Methods
These methods are not supported. The only way to obtain a J3DGraphics2D is
from the associated Canvas3D.
These methods are not supported. Clearing a Canvas3D is done implicitly via a
Background node in the scene graph or explicitly via the clear method in a 3D
graphics context.
Variables
The component values of a Tuple2d are directly accessible through the public
variables x and y. To access the x component of a Tuple2d called upperLeft-
Corner, a programmer would write upperLeftCorner.x. The programmer
would access the y component similarly.
Tuple Objects
Tuple2d
Point2d
Vector2d
Tuple2f
Point2f
TexCoord2f
Vector2f
Tuple3b
Color3b
Tuple3d
Point3d
Vector3d
Tuple3f
Color3f
Point3f
TexCoord3f
Vector3f
Tuple3i
Point3i
Tuple4b
Color4b
Tuple4d
Point4d
Quat4d
Vector4d
Tuple4f
Color4f
Point4f
Quat4f
Vector4f
Tuple4i
Point4i
AxisAngle4d
AxisAngle4f
GVector
Matrix Objects
Matrix3f
Matrix3d
Matrix4f
Matrix4d
GMatrix
public double x
public double y
Constructors
These five constructors each return a new Tuple2d. The first constructor gener-
ates a Tuple2d from two double-precision floating-point numbers x and y. The
second constructor generates a Tuple2d from the first two elements of array t.
The third and fourth constructors generate a Tuple2d from the tuple t1. The final
constructor generates a Tuple2d with the value of (0.0, 0.0).
Methods
The first set method sets the value of this tuple to the specified xy coordinates.
The second set method sets the value of this tuple from the two values specified
in the array t. The third and fourth set methods set the value of this tuple to the
value of the tuple t1. The get method copies the value of the elements of this
tuple into the array t.
The first add method sets the value of this tuple to the vector sum of tuples v1
and v2. The second add method sets the value of this tuple to the vector sum of
itself and tuple t1. The first sub method sets the value of this tuple to the vector
difference of tuple t1 and t2 (this = t1 – t2). The second sub method sets the
value of this tuple to the vector difference of itself and tuple t1 (this = this – t1).
The first negate method sets the value of this tuple to the negation of tuple t1.
The second method negates the value of this vector in place.
The first scale method multiplies each element of the tuple t1 by the scale fac-
tor s and places the resulting scaled tuple into this. The second method multi-
plies each element of this tuple by the scale factor s and places the resulting
scaled tuple into this. The first scaleAdd method scales this tuple by the scale
factor s, adds the result to tuple t1, and places the result into the tuple this (this
= s*this + t1). The second scaleAdd method scales tuple t1 by the scale factor
s, adds the result to tuple t2, then places the result into the tuple this (this =
s*t1 + t2).
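The scale and scaleAdd semantics can be illustrated with a minimal plain-Java sketch. The class below is a hypothetical helper operating on bare arrays, not part of the javax.vecmath API.

```java
// Hypothetical helper illustrating Tuple2d-style scale/scaleAdd semantics;
// not part of the javax.vecmath API.
public class ScaleAddSketch {
    // this = s * t1 (element-by-element scale)
    public static double[] scale(double s, double[] t1) {
        return new double[] { s * t1[0], s * t1[1] };
    }

    // this = s * t1 + t2 (the second scaleAdd form)
    public static double[] scaleAdd(double s, double[] t1, double[] t2) {
        return new double[] { s * t1[0] + t2[0], s * t1[1] + t2[1] };
    }

    public static void main(String[] args) {
        double[] r = scaleAdd(2.0, new double[] {1.0, 2.0}, new double[] {10.0, 20.0});
        System.out.println(r[0] + ", " + r[1]);  // 12.0, 22.0
    }
}
```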
This method sets each component of the tuple parameter to its absolute value and
places the modified values into this tuple.
The first clamp method clamps this tuple to the range [min, max]. The second
clamp method clamps the values from tuple t to the range [min, max] and assigns
these clamped values to this tuple. The first clampMin method clamps each value
of this tuple to the min parameter. The second clampMin method clamps each
value of the tuple t and assigns these clamped values to this tuple. The first
clampMax method clamps each value of this tuple to the max parameter. The sec-
ond clampMax method clamps each value of tuple t to the max parameter and
assigns these clamped values to this tuple. In each method the values of tuple t
remain unchanged.
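The clamp-to-range behavior can be sketched in plain Java. The class below is an illustrative helper on bare arrays, not part of the javax.vecmath API.

```java
// Hypothetical helper illustrating the Tuple clamp semantics;
// not part of the javax.vecmath API.
public class ClampSketch {
    // Clamps each component of t to the range [min, max] and returns the result.
    public static double[] clamp(double min, double max, double[] t) {
        return new double[] {
            Math.min(max, Math.max(min, t[0])),
            Math.min(max, Math.max(min, t[1]))
        };
    }

    public static void main(String[] args) {
        double[] r = clamp(0.0, 1.0, new double[] {-0.5, 2.0});
        System.out.println(r[0] + ", " + r[1]);  // 0.0, 1.0
    }
}
```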
The first method linearly interpolates between tuples t1 and t2 and places the
result into this tuple (this = (1 – alpha) * t1 + alpha * t2). The second method lin-
early interpolates between this tuple and tuple t1 and places the result into this
tuple (this = (1 – alpha) * this + alpha * t1).
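The linear interpolation above can be sketched in plain Java. The LerpSketch class below is a hypothetical illustration on bare arrays, not part of javax.vecmath.

```java
// Hypothetical helper illustrating the interpolate semantics
// (this = (1 - alpha) * t1 + alpha * t2); not part of the javax.vecmath API.
public class LerpSketch {
    public static double[] interpolate(double[] t1, double[] t2, double alpha) {
        return new double[] {
            (1.0 - alpha) * t1[0] + alpha * t2[0],
            (1.0 - alpha) * t1[1] + alpha * t2[1]
        };
    }

    public static void main(String[] args) {
        // A quarter of the way from (0, 0) to (4, 8).
        double[] r = interpolate(new double[] {0.0, 0.0}, new double[] {4.0, 8.0}, 0.25);
        System.out.println(r[0] + ", " + r[1]);  // 1.0, 2.0
    }
}
```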
The first method returns true if all of the data members of tuple t1 are equal to
the corresponding data members in this tuple. The second method returns true if
the Object t1 is of type Tuple2d and all of the data members of t1 are equal to
the corresponding data members in this Tuple2d.
This method returns true if the L∞ distance between this tuple and tuple t1 is
less than or equal to the epsilon parameter. Otherwise, this method returns
false. The L∞ distance is equal to MAX[abs(x1 – x2), abs(y1 – y2)].
The hashCode method returns a hash number based on the data values in this
object. Two Tuple2d objects with identical data values (that is,
equals(Tuple2d) returns true) will return the same hash number. Two objects
with different data members may return the same hash number, although this is
not likely.
This method returns a string that contains the values of this Tuple2d.
Constructors
These seven constructors each return a new Point2d. The first constructor gener-
ates a Point2d from two double-precision floating-point numbers x and y. The
second constructor generates a Point2d from the first two elements of array p.
The third and fourth constructors generate a Point2d from the point p1. The fifth
and sixth constructors generate a Point2d from the tuple t1. The final constructor
generates a Point2d with the value of (0.0, 0.0).
Methods
This method computes the L1 (Manhattan) distance between this point and point
p1. The L1 distance is equal to
abs(x1 – x2) + abs(y1 – y2)
This method computes the L∞ distance between this point and point p1. The L∞
distance is equal to MAX[abs(x1 – x2), abs(y1 – y2)].
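Both distance measures can be sketched in plain Java. The class below is an illustrative helper on bare coordinate arrays, not the javax.vecmath Point2d API.

```java
// Hypothetical helper illustrating the L1 and L-infinity point distances;
// not part of the javax.vecmath API.
public class DistanceSketch {
    // L1 (Manhattan) distance: abs(x1 - x2) + abs(y1 - y2)
    public static double distanceL1(double[] p1, double[] p2) {
        return Math.abs(p1[0] - p2[0]) + Math.abs(p1[1] - p2[1]);
    }

    // L-infinity distance: max(abs(x1 - x2), abs(y1 - y2))
    public static double distanceLinf(double[] p1, double[] p2) {
        return Math.max(Math.abs(p1[0] - p2[0]), Math.abs(p1[1] - p2[1]));
    }

    public static void main(String[] args) {
        double[] a = {1.0, 2.0}, b = {4.0, -2.0};
        System.out.println(distanceL1(a, b));    // 7.0
        System.out.println(distanceLinf(a, b));  // 4.0
    }
}
```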
Constructors
These seven constructors each return a new Vector2d. The first constructor gen-
erates a Vector2d from two floating-point numbers x and y. The second construc-
tor generates a Vector2d from the first two elements of array v. The third and
fourth constructors generate a Vector2d from the vector v1. The fifth and sixth
constructors generate a Vector2d from the specified tuple t1. The final construc-
tor generates a Vector2d with the value of (0.0, 0.0).
Methods
The dot method computes the dot product between this vector and vector v1 and
returns the resulting value.
The lengthSquared method computes the square of the length of the vector
this and returns its length as a double-precision floating-point number. The
length method computes the length of the vector this and returns its length as
a double-precision floating-point number.
The first normalize method normalizes the vector v1 to unit length and places
the result in this. The second normalize method normalizes the vector this
and places the resulting unit vector back into this.
This method returns the angle, in radians, between this vector and vector v1. The
return value is constrained to the range [0, π].
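The dot, length, and angle computations can be sketched in plain Java. The class below is an illustrative helper on bare arrays, not the javax.vecmath Vector2d API.

```java
// Hypothetical helper illustrating Vector2d-style dot, length, and angle;
// not part of the javax.vecmath API.
public class AngleSketch {
    public static double dot(double[] v1, double[] v2) {
        return v1[0] * v2[0] + v1[1] * v2[1];
    }

    public static double length(double[] v) {
        return Math.sqrt(dot(v, v));
    }

    // Angle in radians between v1 and v2, constrained to [0, PI].
    public static double angle(double[] v1, double[] v2) {
        double c = dot(v1, v2) / (length(v1) * length(v2));
        // Clamp to guard against rounding drift outside [-1, 1] before acos.
        c = Math.max(-1.0, Math.min(1.0, c));
        return Math.acos(c);
    }

    public static void main(String[] args) {
        // Perpendicular vectors: angle is PI/2.
        System.out.println(angle(new double[] {1.0, 0.0}, new double[] {0.0, 1.0}));
    }
}
```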
Variables
The component values of a Tuple2f are directly accessible through the public
variables x and y. To access the x component of a Tuple2f called upperLeftCor-
ner, a programmer would write upperLeftCorner.x. The programmer would
access the y component similarly.
public float x
public float y
Constructors
These five constructors each return a new Tuple2f. The first constructor generates
a Tuple2f from two floating-point numbers x and y. The second constructor gen-
erates a Tuple2f from the first two elements of array t. The third and fourth con-
structors generate a Tuple2f from the tuple t1. The final constructor generates a
Tuple2f with the value of (0.0, 0.0).
Methods
The set methods set the value of tuple this to the values provided. The get
method copies the values of the elements of this tuple into the array t.
The first add method computes the element-by-element sum of tuples t1 and t2,
placing the result in this. The second add method computes the element-by-ele-
ment sum of this tuple and tuple t1, placing the result in this. The first sub
method performs an element-by-element subtraction of tuple t2 from tuple t1
and places the result in this (this = t1 – t2). The second sub method performs an
element-by-element subtraction of t1 from this and places the result in this
(this = this – t1).
The first negate method sets the values of this tuple to the negative of the values
from tuple t1. The second negate method negates the tuple this and places the
resulting tuple back into this.
The first scale method multiplies each element of the tuple t1 by the scale fac-
tor s and places the resulting scaled tuple into this. The second scale method
multiplies each element of this tuple by the scale factor s and places the resulting
scaled tuple into this. The first scaleAdd method scales this tuple by the scale
factor s, adds the result to tuple t1, and places the result into the tuple this (this
= s*this + t1). The second scaleAdd method scales tuple t1 by the scale factor
s, adds the result to tuple t2, then places the result into the tuple this (this =
s*t1 + t2).
The first absolute method sets each component of this tuple to its absolute
value. The second absolute method sets each component of this tuple to the
absolute value of the corresponding component in tuple t.
The first clamp method clamps this tuple to the range [min, max]. The second
clamp method clamps the values from tuple t to the range [min, max] and assigns
these clamped values to this tuple. The first clampMin method clamps each value
of this tuple to the min parameter. The second clampMin method clamps each
value of the tuple t and assigns these clamped values to this tuple. The first
clampMax method clamps each value of this tuple to the max parameter. The sec-
ond clampMax method clamps each value of tuple t to the max parameter and
assigns these clamped values to this tuple. In each method the values of tuple t
remain unchanged.
The first method linearly interpolates between tuples t1 and t2 and places the
result into this tuple (this = (1 – alpha) * t1 + alpha * t2). The second method lin-
early interpolates between this tuple and tuple t1 and places the result into this
tuple (this = (1 – alpha) * this + alpha * t1).
The first method returns true if all of the data members of tuple t1 are equal to
the corresponding data members in this tuple. The second method returns true if
the Object t1 is of type Tuple2f and all of the data members of t1 are equal to
the corresponding data members in this Tuple2f.
This method returns true if the L∞ distance between this tuple and tuple t1 is
less than or equal to the epsilon parameter. Otherwise, this method returns
false. The L∞ distance is equal to MAX[abs(x1 – x2), abs(y1 – y2)].
The hashCode method returns a hash number based on the data values in this
object. Two Tuple2f objects with identical data values (that is, equals(Tuple2f)
returns true) will return the same hash number. Two objects with different data
members may return the same hash number, although this is not likely.
This method returns a string that contains the values of this Tuple2f.
Constructors
These seven constructors each return a new Point2f. The first constructor gener-
ates a Point2f from two floating-point numbers x and y. The second constructor
generates a Point2f from the first two elements of array p. The third and fourth
constructors generate a Point2f from the point p1. The fifth and sixth construc-
tors generate a Point2f from the tuple t1. The final constructor generates a
Point2f with the value of (0.0, 0.0).
Methods
This method computes the L1 (Manhattan) distance between this point and point
p1. The L1 distance is equal to
abs(x1 – x2) + abs(y1 – y2)
This method computes the L∞ distance between this point and point p1. The L∞
distance is equal to MAX[abs(x1 – x2), abs(y1 – y2)].
Constructors
These seven constructors each return a new Vector2f. The first constructor gener-
ates a Vector2f from two floating-point numbers x and y. The second constructor
generates a Vector2f from the first two elements of array v. The third and fourth
constructors generate a Vector2f from the vector v1. The fifth and sixth construc-
tors generate a Vector2f from the specified tuple t1. The final constructor gener-
ates a Vector2f with the value of (0.0, 0.0).
Methods
The dot method computes the dot product between this vector and vector v1 and
returns the resulting value.
The lengthSquared method computes the square of the length of the vector
this and returns its length as a single-precision floating-point number. The
length method computes the length of the vector this and returns its length as
a single-precision floating-point number.
The first normalize method normalizes the vector v1 to unit length and places
the result in this. The second normalize method normalizes the vector this
and places the resulting unit vector back into this.
This method returns the angle, in radians, between this vector and vector v1. The
return value is constrained to the range [0, π].
Constructors
These five constructors each return a new TexCoord2f. The first constructor gen-
erates a TexCoord2f from two floating-point numbers x and y. The second con-
structor generates a TexCoord2f from the first two elements of array v. The third
constructor generates a TexCoord2f from the TexCoord2f v1. The fourth con-
structor generates a TexCoord2f from the Tuple2f t1. The final constructor gen-
erates a TexCoord2f with the value of (0.0, 0.0).
If intValue is greater than 127, then byteVariable will be negative. The correct
value will be extracted when it is used (by masking off the upper bits).
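This narrowing-and-masking behavior is ordinary Java byte arithmetic and can be seen in a short self-contained example:

```java
// Demonstrates storing an unsigned 8-bit value in a Java byte (as Tuple3b does)
// and recovering the original value by masking off the upper bits.
public class ByteMaskSketch {
    public static void main(String[] args) {
        int intValue = 200;                   // greater than 127
        byte byteVariable = (byte) intValue;  // narrowing cast yields -56
        int recovered = byteVariable & 0xFF;  // masking restores 200
        System.out.println(byteVariable + " " + recovered);  // -56 200
    }
}
```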
Variables
The component values of a Tuple3b are directly accessible through the public
variables x, y, and z. To access the x (red) component of a Tuple3b called
myColor, a programmer would write myColor.x. The programmer would access
the y (green) and z (blue) components similarly.
public byte x
public byte y
public byte z
Constructors
These four constructors each return a new Tuple3b. The first constructor gener-
ates a Tuple3b from three bytes b1, b2, and b3. The second constructor generates
a Tuple3b from the first three elements of array t. The third constructor generates
a Tuple3b from the byte-precision Tuple3b t1. The final constructor generates a
Tuple3b with the value of (0, 0, 0).
Methods
This method returns a string that contains the values of this Tuple3b.
The first set method sets the values of the x, y, and z data members of this
Tuple3b to the values in the array t of length three. The second set method sets
the values of the x, y, and z data members of this Tuple3b to the values in the
argument tuple t1. The first get method places the values of the x, y, and z com-
ponents of this Tuple3b into the array t of length three. The second get method
places the values of the x, y, and z components of this Tuple3b into the tuple t1.
public boolean equals(Tuple3b t1)
public boolean equals(Object t1)
The first method returns true if all of the data members of Tuple3b t1 are equal
to the corresponding data members in this tuple. The second method returns true
if the Object t1 is of type Tuple3b and all of the data members of t1 are equal to
the corresponding data members in this Tuple3b.
This method returns a hash number based on the data values in this object. Two
different Tuple3b objects with identical data values (that is, equals(Tuple3b)
returns true) will return the same hash number. Two tuples with different data
members may return the same hash value, although this is not likely.
Constructors
These six constructors each return a new Color3b. The first constructor generates a Color3b from three bytes c1, c2, and c3. The second constructor generates a Color3b from the first three elements of array c. The third constructor generates a Color3b from the byte-precision Color3b c1. The fourth constructor generates a Color3b from the tuple t1. The fifth constructor generates a Color3b from the specified AWT Color object. The final constructor generates a Color3b with the value of (0, 0, 0).
Methods
The set method sets the R,G,B values of this Color3b object to those of the spec-
ified AWT Color object. The get method returns a new AWT Color object initial-
ized with the R,G,B values of this Color3b object.
Variables
The component values of a Tuple3d are directly accessible through the public
variables x, y, and z. To access the x component of a Tuple3d called upperLeft-
Corner, a programmer would write upperLeftCorner.x. The programmer
would access the y and z components similarly.
public double x
public double y
public double z
Constructors
These five constructors each return a new Tuple3d. The first constructor gener-
ates a Tuple3d from three floating-point numbers x, y, and z. The second con-
structor generates a Tuple3d from the first three elements of array t. The third
constructor generates a Tuple3d from the double-precision Tuple3d t1. The
fourth constructor generates a Tuple3d from the single-precision Tuple3f t1. The
final constructor generates a Tuple3d with the value of (0.0, 0.0, 0.0).
Methods
The four set methods set the value of tuple this to the values specified or to the
values of the specified vectors. The two get methods copy the x, y, and z values
into the array t of length three.
The first add method computes the element-by-element sum of tuples t1 and t2
and places the result in this. The second add method computes the ele-
ment-by-element sum of this tuple and tuple t1 and places the result into this.
The first sub method performs an element-by-element subtraction of tuple t2
from tuple t1 and places the result in this (this = t1 – t2). The second sub
method performs an element-by-element subtraction of tuple t1 from this tuple
and places the result in this (this = this – t1).
The first negate method sets the values of this tuple to the negative of the values
from tuple t1. The second negate method negates the tuple this and places the
resulting tuple back into this.
The first scale method multiplies each element of the tuple t1 by the scale fac-
tor s and places the resulting scaled tuple into this. The second scale method
multiplies each element of this tuple by the scale factor s and places the result-
ing scaled tuple back into this. The first scaleAdd method scales this tuple by
the scale factor s, adds the result to tuple t1, and places the result into tuple this
(this = s*this + t1). The second scaleAdd method scales the tuple t1 by the scale
factor s, adds the result to the tuple t2, and places the result into the tuple this
(this = s*t1 + t2).
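The scaleAdd arithmetic above is simple element-by-element computation. A minimal plain-Java sketch of the second form (this = s*t1 + t2), using a double[3] in place of a Tuple3d (the names here are illustrative, not the vecmath implementation):

```java
public class ScaleDemo {
    // Compute s * t1 + t2, element by element, over three components,
    // mirroring the second scaleAdd form described in the text.
    static double[] scaleAdd(double s, double[] t1, double[] t2) {
        double[] r = new double[3];
        for (int i = 0; i < 3; i++) {
            r[i] = s * t1[i] + t2[i];
        }
        return r;
    }

    public static void main(String[] args) {
        double[] r = scaleAdd(2.0, new double[] {1, 2, 3},
                                   new double[] {10, 10, 10});
        System.out.println(java.util.Arrays.toString(r)); // [12.0, 14.0, 16.0]
    }
}
```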
This method returns a string that contains the values of this Tuple3d. The form is
(x, y, z).
This method returns a hash number based on the data values in this object. Two
different Tuple3d objects with identical data values (that is, equals(Tuple3d)
returns true) will return the same hash number. Two tuples with different data
members may return the same hash value, although this is not likely.
The first method returns true if all of the data members of Tuple3d v1 are equal
to the corresponding data members in this Tuple3d. The second method returns
true if the Object t1 is of type Tuple3d and all of the data members of t1 are
equal to the corresponding data members in this Tuple3d.
This method returns true if the L∞ distance between this tuple and tuple t1 is
less than or equal to the epsilon parameter. Otherwise, this method returns
false. The L∞ distance is equal to MAX[abs(x1 – x2), abs(y1 – y2), abs(z1 – z2)].
The first absolute method sets each component of this tuple to its absolute
value. The second absolute method sets each component of this tuple to the
absolute value of the corresponding component in tuple t.
The first clamp method clamps this tuple to the range [min, max]. The second
clamp method clamps the values from tuple t to the range [min, max] and assigns
these clamped values to this tuple. The first clampMin method clamps each value
of this tuple to the min parameter. The second clampMin method clamps each
value of the tuple t to the min parameter and assigns these clamped values to this tuple. The first
clampMax method clamps each value of this tuple to the max parameter. The sec-
ond clampMax method clamps each value of tuple t to the max parameter and
assigns these clamped values to this tuple. In each method, the values of tuple t
remain unchanged.
The first interpolate method linearly interpolates between tuples t1 and t2 and
places the result into this tuple (this = (1 – alpha) * t1 + alpha * t2). The second
interpolate method linearly interpolates between this tuple and tuple t1 and
places the result into this tuple (this = (1 – alpha) * this + alpha * t1).
Constructors
These seven constructors each return a new Point3d. The first constructor gener-
ates a Point3d from three floating-point numbers x, y, and z. The second con-
structor generates a Point3d from the first three elements of array p. The third
constructor generates a Point3d from the double-precision Point3d p1. The fourth
constructor generates a Point3d from the single-precision Point3f p1. The fifth
and sixth constructors generate a Point3d from the tuple t1. The final constructor
generates a Point3d with the value of (0.0, 0.0, 0.0).
Version 1.2, March 2000 381
A.1.4 Tuple3d Class MATH OBJECTS
Methods
This method computes the L1 (Manhattan) distance between this point and point
p1. The L1 distance is equal to abs(x1 – x2) + abs(y1 – y2) + abs(z1 – z2).
This method computes the L∞ distance between this point and point p1. The L∞
distance is equal to MAX[abs(x1 – x2), abs(y1 – y2), abs(z1 – z2)].
This method multiplies each of the x, y, and z components of the Point4d param-
eter p1 by 1/w and places the projected values into this point.
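The projection step is a division of the spatial components by the homogeneous w coordinate. A minimal sketch, assuming the point is held as a double[4] in (x, y, z, w) order (illustrative, not the vecmath implementation):

```java
public class ProjectDemo {
    // Project a homogeneous point (x, y, z, w) to the 3D point
    // (x/w, y/w, z/w). Assumes w is nonzero.
    static double[] project(double[] p1) {
        double inv = 1.0 / p1[3];
        return new double[] { p1[0] * inv, p1[1] * inv, p1[2] * inv };
    }

    public static void main(String[] args) {
        double[] p = project(new double[] {4, 8, 12, 4});
        System.out.println(java.util.Arrays.toString(p)); // [1.0, 2.0, 3.0]
    }
}
```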
Constructors
These seven constructors each return a new Vector3d. The first constructor gen-
erates a Vector3d from three floating-point numbers x, y, and z. The second con-
structor generates a Vector3d from the first three elements of array v. The third
constructor generates a Vector3d from the double-precision vector v1. The fourth
constructor generates a Vector3d from the single-precision vector v1. The fifth
and sixth constructors generate a Vector3d from the tuple t1. The final construc-
tor generates a Vector3d with the value of (0.0, 0.0, 0.0).
Methods
The cross method computes the vector cross-product of vectors v1 and v2 and
places the result in this.
The first normalize method normalizes the vector v1 to unit length and places
the result in this. The second normalize method normalizes the vector this
and places the resulting unit vector back into this.
The dot method returns the dot product of this vector and vector v1.
The lengthSquared method returns the squared length of this vector. The
length method returns the length of this vector.
This method returns the angle, in radians, between this vector and the vector v1
parameter. The return value is constrained to the range [0, π].
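The angle computation follows from the dot product identity cos(θ) = (v1 · v2) / (|v1| |v2|). A plain-Java sketch over double[3] vectors (illustrative names, not the vecmath implementation), with the cosine clamped to [–1, 1] to guard against rounding before Math.acos:

```java
public class AngleDemo {
    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    static double length(double[] a) {
        return Math.sqrt(dot(a, a));
    }

    // Angle in radians between two nonzero vectors, in [0, pi].
    static double angle(double[] a, double[] b) {
        double c = dot(a, b) / (length(a) * length(b));
        c = Math.max(-1.0, Math.min(1.0, c)); // guard against rounding drift
        return Math.acos(c);
    }

    public static void main(String[] args) {
        double a = angle(new double[] {1, 0, 0}, new double[] {0, 1, 0});
        System.out.println(a); // pi/2 for perpendicular vectors
    }
}
```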
Variables
The component values of a Tuple3f are directly accessible through the public
variables x, y, and z. To access the x component of a Tuple3f called upperLeftCorner, a programmer would write upperLeftCorner.x. The programmer would access the y and z components similarly.
public float x
public float y
public float z
Constructors
These five constructors each return a new Tuple3f. The first constructor generates
a Tuple3f from three floating-point numbers x, y, and z. The second constructor
generates a Tuple3f from the first three elements of array t. The third constructor
generates a Tuple3f from the double-precision Tuple3d t1. The fourth construc-
tor generates a Tuple3f from the single-precision Tuple3f t1. The final construc-
tor generates a Tuple3f with the value of (0.0, 0.0, 0.0).
Methods
This method returns a string that contains the values of this Tuple3f.
The four set methods set the value of vector this to the coordinates provided or
to the values of the vectors provided. The first get method gets the value of this
vector and copies the values into the array t. The second get method gets the
value of this vector and copies the values into tuple t.
The first add method computes the element-by-element sum of tuples t1 and t2,
placing the result in this. The second add method computes the element-by-ele-
ment sum of this and tuple t1 and places the result in this. The first sub
method performs an element-by-element subtraction of tuple t2 from tuple t1
and places the result in this (this = t1 – t2). The second sub method performs an
element-by-element subtraction of tuple t1 from this tuple and places the result
into this (this = this – t1).
The first negate method sets the values of this tuple to the negative of the values
from tuple t1. The second negate method negates the vector this and places the
resulting tuple back into this.
The first scale method multiplies each element of the vector t1 by the scale fac-
tor s and places the resulting scaled vector into this. The second scale method
multiplies the vector this by the scale factor s and replaces this with the scaled
value. The first scaleAdd method scales this tuple by the scale factor s, adds the
result to tuple t1, and places the result into tuple this (this = s*this + t1). The
second scaleAdd method scales the tuple t1 by the scale factor s, adds the result
to the tuple t2, and places the result into the tuple this (this = s*t1 + t2).
The first method returns true if all of the data members of tuple t1 are equal to
the corresponding data members in this Tuple3f. The second method returns true
if the Object t1 is of type Tuple3f and all of the data members of t1 are equal to
the corresponding data members in this Tuple3f.
This method returns true if the L∞ distance between this tuple and tuple t1 is
less than or equal to the epsilon parameter. Otherwise, this method returns
false. The L∞ distance is equal to MAX[abs(x1 – x2), abs(y1 – y2), abs(z1 – z2)].
The first absolute method sets each component of this tuple to its absolute
value. The second absolute method sets each component of this tuple to the
absolute value of the corresponding component in tuple t.
The first clamp method clamps this tuple to the range [min, max]. The second
clamp method clamps the values from tuple t to the range [min, max] and assigns
these clamped values to this tuple. The first clampMin method clamps each value
of this tuple to the min parameter. The second clampMin method clamps each
value of the tuple t to the min parameter and assigns these clamped values to this tuple. The first
clampMax method clamps each value of this tuple to the max parameter. The sec-
ond clampMax method clamps each value of tuple t to the max parameter and
assigns these clamped values to this tuple. In each method the values of tuple t
remain unchanged.
The first method linearly interpolates between tuples t1 and t2 and places the
result into this tuple (this = (1 – alpha) * t1 + alpha * t2). The second method lin-
early interpolates between this tuple and tuple t1 and places the result into this
tuple (this = (1–alpha) * this + alpha * t1).
This method returns a hash number based on the data values in this object. Two
different Tuple3f objects with identical data values (that is, equals(Tuple3f)
returns true) will return the same hash number. Two tuples with different data
members may return the same hash value, although this is not likely.
Constructors
These seven constructors each return a new Point3f. The first constructor gener-
ates a point from three floating-point numbers x, y, and z. The second construc-
tor (Point3f(float p[])) generates a point from the first three elements of array
p. The third constructor generates a point from the double-precision point p1.
The fourth constructor generates a point from the single-precision point p1. The
fifth and sixth constructors generate a Point3f from the tuple t1. The final con-
structor generates a point with the value of (0.0, 0.0, 0.0).
Methods
The distance method computes the Euclidean distance between this point and
the point p1 and returns the result. The distanceSquared method computes the
square of the Euclidean distance between this point and the point p1 and returns
the result.
This method computes the L1 (Manhattan) distance between this point and point
p1. The L1 distance is equal to abs(x1 – x2) + abs(y1 – y2) + abs(z1 – z2).
This method computes the L∞ distance between this point and point p1. The L∞
distance is equal to MAX[abs(x1 – x2), abs(y1 – y2), abs(z1 – z2)].
This method multiplies each of the x, y, and z components of the Point4f param-
eter p1 by 1/w and places the projected values into this point.
Constructors
These seven constructors each return a new Vector3f. The first constructor gener-
ates a Vector3f from three floating-point numbers x, y, and z. The second con-
structor generates a Vector3f from the first three elements of array v. The third
constructor generates a Vector3f from the double-precision Vector3d v1. The
fourth constructor generates a Vector3f from the single-precision Vector3f v1.
The fifth and sixth constructors generate a Vector3f from the tuple t1. The final
constructor generates a Vector3f with the value of (0.0, 0.0, 0.0).
Methods
The length method computes the length of the vector this and returns its length
as a single-precision floating-point number. The lengthSquared method computes the square of the length of the vector this and returns the result as a single-precision floating-point number.
The cross method computes the vector cross-product of v1 and v2 and places
the result in this.
The dot method computes the dot product between this vector and the vector v1
and returns the resulting value.
The first normalize method normalizes the vector v1 to unit length and places
the result in this. The second normalize method normalizes the vector this
and places the resulting unit vector back into this.
This method returns the angle, in radians, between this vector and the vector
parameter. The return value is constrained to the range [0, π].
Constructors
These six constructors each return a new TexCoord3f. The first constructor gen-
erates a texture coordinate from three floating-point numbers x, y, and z. The
second constructor generates a texture coordinate from the first three elements of
array v. The third constructor generates a texture coordinate from the single-pre-
cision TexCoord3f v1. The fourth and fifth constructors generate a texture coor-
dinate from tuple t1. The final constructor generates a texture coordinate with
the value of (0.0, 0.0, 0.0).
Constructors
These six constructors each return a new Color3f. The first constructor generates
a Color3f from three floating-point numbers x, y, and z. The second constructor
(Color3f(float v[])) generates a Color3f from the first three elements of array
v. The third constructor generates a Color3f from the single-precision color v1.
The fourth and fifth constructors generate a Color3f from the tuple t1. The sixth
constructor generates a Color3f from the specified AWT Color object. The final
constructor generates a Color3f with the value of (0.0, 0.0, 0.0).
Methods
The set method sets the R,G,B values of this Color3f object to those of the spec-
ified AWT Color object. The get method returns a new AWT Color object initial-
ized with the R,G,B values of this Color3f object.
Variables
The component values of a Tuple3i are directly accessible through the public
variables x, y, and z. To access the x component of a Tuple3i called upperLeft-
Corner, a programmer would write upperLeftCorner.x. The programmer
would access the y and z components similarly.
Constructors
These four constructors each return a new Tuple3i. The first constructor gener-
ates a Tuple3i from the specified x, y, and z coordinates. The second constructor
generates a Tuple3i from the array of length 3. The third constructor generates a
Tuple3i from the specified Tuple3i. The final constructor generates a Tuple3i
with the value of (0,0,0).
Methods
This method returns a string that contains the values of this Tuple3i.
The first set method sets the value of this tuple to the specified x, y, and z coor-
dinates. The second set method sets the value of this tuple to the specified coor-
dinates in the array of length 3. The third set method sets the value of this tuple
to the value of tuple t1. The first get method copies the values of this tuple into
the array t. The second get method copies the values of this tuple into the tuple t.
The first method sets the value of this tuple to the sum of tuples t1 and t2. The
second method sets the value of this tuple to the sum of itself and t1.
The first method sets the value of this tuple to the difference of tuples t1 and t2
(this = t1 – t2). The second method sets the value of this tuple to the difference
of itself and t1 (this = this – t1).
The first method sets the value of this tuple to the negation of tuple t1. The sec-
ond method negates the value of this tuple in place.
The first method sets the value of this tuple to the scalar multiplication of tuple
t1. The second method sets the value of this tuple to the scalar multiplication of
the scale factor with this.
public final void scaleAdd(int s, Tuple3i t1, Tuple3i t2) New in 1.2
public final void scaleAdd(int s, Tuple3i t1) New in 1.2
The first method sets the value of this tuple to the scalar multiplication of tuple
t1 plus tuple t2 (this = s*t1 + t2). The second method sets the value of this tuple
to the scalar multiplication of itself and then adds tuple t1 (this = s*this + t1).
This method returns true if the Object t1 is of type Tuple3i and all of the data
members of t1 are equal to the corresponding data members in this Tuple3i.
public final void clamp(int min, int max, Tuple3i t) New in 1.2
public final void clamp(int min, int max) New in 1.2
The first method clamps the tuple parameter to the range [min, max] and places
the values into this tuple. The second method clamps this tuple to the range
[min, max].
public final void clampMin(int min, Tuple3i t) New in 1.2
public final void clampMin(int min) New in 1.2
public final void clampMax(int max, Tuple3i t) New in 1.2
public final void clampMax(int max) New in 1.2
The first method clamps the minimum value of the tuple parameter to the min
parameter and places the values into this tuple. The second method clamps the
minimum value of this tuple to the min parameter. The third method clamps the
maximum value of the tuple parameter to the max parameter and places the val-
ues into this tuple. The final method clamps the maximum value of this tuple to
the max parameter.
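The clamp family described above reduces to per-component min/max operations. A plain-Java sketch of the three-argument clamp form over an int[3] (illustrative names, not the vecmath implementation):

```java
public class ClampDemo {
    // Clamp each component of t into the range [min, max] and return
    // the clamped values; t itself is left unchanged, mirroring the
    // clamp(min, max, t) form described in the text.
    static int[] clamp(int min, int max, int[] t) {
        int[] r = new int[3];
        for (int i = 0; i < 3; i++) {
            r[i] = Math.min(max, Math.max(min, t[i]));
        }
        return r;
    }

    public static void main(String[] args) {
        int[] r = clamp(0, 10, new int[] {-5, 7, 42});
        System.out.println(java.util.Arrays.toString(r)); // [0, 7, 10]
    }
}
```

clampMin and clampMax correspond to applying only the Math.max or only the Math.min step, respectively.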
The first method sets each component of the tuple parameter to its absolute value
and places the modified values into this tuple. The second method sets each com-
ponent of this tuple to its absolute value.
This method returns a hash code value based on the data values in this object.
Two different Tuple3i objects with identical data values (i.e., Tuple3i.equals
returns true) will return the same hash code value. Two objects with different
data members may return the same hash value, although this is not likely.
The Point3i class extends Tuple3i. Point3i is a three-element point represented by signed integer x, y, and z coordinates.
Constructors
These four constructors each return a new Point3i. The first constructor generates
a Point3i from the specified x, y, and z coordinates. The second constructor gen-
erates a Point3i from the array of length 3. The third constructor generates a
Point3i from the specified Tuple3i. The final constructor generates a Point3i with
the value of (0,0,0).
If intValue is greater than 127, then byteVariable will be negative. The correct
value will be extracted when it is used (by masking off the upper bits).
Variables
The component values of a Tuple4b are directly accessible through the public
variables x, y, z, and w. The x, y, z, and w values represent the red, green, blue,
and alpha values, respectively. To access the x (red) component of a Tuple4b
called backgroundColor, a programmer would write backgroundColor.x. The
programmer would access the y (green), z (blue), and w (alpha) components sim-
ilarly.
public byte x
public byte y
public byte z
public byte w
Constructors
These four constructors each return a new Tuple4b. The first constructor gener-
ates a Tuple4b from four bytes b1, b2, b3, and b4. The second constructor
(Tuple4b(byte t[])) generates a Tuple4b from the first four elements of array t.
The third constructor generates a Tuple4b from the byte-precision Tuple4b t1.
The final constructor generates a Tuple4b with the value of (0, 0, 0, 0).
Methods
This method returns a string that contains the values of this Tuple4b.
The first set method sets the value of the data members of this Tuple4b to the
value of the array b. The second set method sets the value of the data members
of this Tuple4b to the value of the argument tuple t1. The first get method
places the values of the x, y, z, and w components of this Tuple4b into the byte
array b. The second get method places the values of the x, y, z, and w compo-
nents of this Tuple4b into the Tuple4b t1.
The first method returns true if all of the data members of Tuple4b t1 are equal
to the corresponding data members in this Tuple4b. The second method returns
true if the Object t1 is of type Tuple4b and all of the data members of t1 are
equal to the corresponding data members in this Tuple4b.
This method returns a hash number based on the data values in this object. Two
different Tuple4b objects with identical data values (that is, equals(Tuple4b)
returns true) will return the same hash number. Two Tuple4b objects with differ-
ent data members may return the same hash value, although this is not likely.
Constructors
These six constructors each return a new Color4b. The first constructor generates a Color4b from four bytes b1, b2, b3, and b4. The second constructor generates a Color4b from the first four elements of byte array c. The third constructor generates a Color4b from the byte-precision Color4b c1. The fourth constructor generates a Color4b from the tuple t1. The fifth constructor generates a Color4b from the specified AWT Color object. The final constructor generates a Color4b with the value of (0, 0, 0, 0).
Methods
The set method sets the R,G,B,A values of this Color4b object to those of the
specified AWT Color object. The get method returns a new AWT Color object
initialized with the R,G,B,A values of this Color4b object.
Variables
The component values of a Tuple4d are directly accessible through the public
variables x, y, z, and w. To access the x component of a Tuple4d called upper-
LeftCorner, a programmer would write upperLeftCorner.x. The programmer
would access the y, z, and w components similarly.
public double x
public double y
public double z
public double w
Constructors
These five constructors each return a new Tuple4d. The first constructor gener-
ates a Tuple4d from four floating-point numbers x, y, z, and w. The second con-
structor (Tuple4d(double t[])) generates a Tuple4d from the first four elements
of array t. The third constructor generates a Tuple4d from the double-precision
tuple t1. The fourth constructor generates a Tuple4d from the single-precision
tuple t1. The final constructor generates a Tuple4d with the value of (0.0, 0.0,
0.0, 0.0).
Methods
These methods set the value of the tuple this to the values specified or to the
values of the specified tuples. The first get method retrieves the value of this
tuple and places it into the array t of length four, in x, y, z, w order. The second
get method retrieves the value of this tuple and places it into tuple t.
The first add method computes the element-by-element sum of the tuple t1 and
the tuple t2, placing the result in this. The second add method computes the
element-by-element sum of this tuple and the tuple t1 and places the result in
this. The first sub method performs an element-by-element subtraction of tuple
t2 from tuple t1 and places the result in this. The second sub method performs
an element-by-element subtraction of tuple t1 from this tuple and places the
result in this.
The first negate method sets the values of this tuple to the negative of the values
from tuple t1. The second negate method negates the tuple this and places the
resulting tuple back into this.
The first scale method multiplies each element of the tuple t1 by the scale fac-
tor s and places the resulting scaled tuple into this. The second scale method
multiplies the tuple this by the scale factor s and replaces this with the scaled
value. The first scaleAdd method scales this tuple by the scale factor s, adds the
result to tuple t1, and places the result into tuple this (this = s*this + t1). The
second scaleAdd method scales the tuple t1 by the scale factor s, adds the result
to the tuple t2, and places the result into the tuple this (this = s*t1 + t2).
The first interpolate method linearly interpolates between tuples t1 and t2 and
places the result into this tuple (this = (1 – alpha) * t1 + alpha * t2). The second
interpolate method linearly interpolates between this tuple and tuple t1 and
places the result into this tuple (this = (1 – alpha) * this + alpha * t1).
This method returns a string that contains the values of this tuple. The form is
(x, y, z, w).
The first method returns true if all of the data members of tuple v1 are equal to
the corresponding data members in this tuple. The second method returns true if
the Object t1 is of type Tuple4d and all of the data members of t1 are equal to
the corresponding data members in this Tuple4d.
This method returns true if the L∞ distance between this Tuple4d and Tuple4d
t1 is less than or equal to the epsilon parameter. Otherwise, this method returns
false. The L∞ distance is equal to MAX[abs(x1 – x2), abs(y1 – y2), abs(z1 – z2), abs(w1 – w2)].
The first absolute method sets each component of this tuple to its absolute
value. The second absolute method sets each component of this tuple to the
absolute value of the corresponding component in tuple t.
The first clamp method clamps this tuple to the range [min, max]. The second clamp method clamps the values from tuple t to the range [min, max] and assigns these clamped values to this tuple. The first clampMin method clamps each value of this tuple to the min parameter. The second clampMin method clamps each value of tuple t to the min parameter and assigns these clamped values to this tuple. The first clampMax method clamps each value of this tuple to the max parameter. The second clampMax method clamps each value of tuple t to the max parameter and assigns these clamped values to this tuple. In each method, the values of tuple t remain unchanged.
This method returns a hash number based on the data values in this object. Two
different Tuple4d objects with identical data values (that is, equals(Tuple4d)
returns true) will return the same hash number. Two Tuple4d objects with differ-
ent data members may return the same hash value, although this is not likely.
Constructors
These eight constructors each return a new Point4d. The first constructor gener-
ates a Point4d from four floating-point numbers x, y, z, and w. The second constructor (Point4d(double p[])) generates a Point4d from the first four elements
of array p. The third constructor generates a Point4d from the double-precision
point p1. The fourth constructor generates a Point4d from the single-precision
point p1. The fifth and sixth constructors generate a Point4d from tuple t1. The
seventh constructor generates a Point4d from the specified Tuple3d – the w com-
ponent of this point is set to 1. The final constructor generates a Point4d with the
value of (0.0, 0.0, 0.0, 0.0).
Methods
This method sets the x,y,z components of this point to the corresponding compo-
nents of tuple t1. The w component of this point is set to 1.
The distance method computes the Euclidean distance between this point and
the point p1 and returns the result. The distanceSquared method computes the
square of the Euclidean distance between this point and the point p1 and returns
the result.
This method computes the L1 (Manhattan) distance between this point and point
p1. The L1 distance is equal to abs(x1 – x2) + abs(y1 – y2) + abs(z1 – z2) + abs(w1 – w2).
This method computes the L∞ distance between this point and point p1. The L∞
distance is equal to MAX[abs(x1 – x2), abs(y1 – y2), abs(z1 – z2), abs(w1 – w2)].
Constructors
These eight constructors each return a new Vector4d. The first constructor gener-
ates a Vector4d from four floating-point numbers x, y, z, and w. The second con-
structor generates a Vector4d from the first four elements of array v. The third
constructor generates a Vector4d from the double-precision Vector4d v1. The
fourth constructor generates a Vector4d from the single-precision Vector4f v1.
The fifth and sixth constructors generate a Vector4d from tuple t1. The seventh
constructor generates a Vector4d from the specified Tuple3d – the w component
of this vector is set to 0. The final constructor generates a Vector4d with the
value of (0.0, 0.0, 0.0, 0.0).
Methods
This method sets the x,y,z components of this vector to the corresponding com-
ponents of tuple t1. The w component of this vector is set to 0.
The length method computes the length of the vector this and returns its length
as a double-precision floating-point number. The lengthSquared method computes the square of the length of the vector this and returns the result as a double-precision floating-point number.
This method returns the dot product of this vector and vector v1.
The first normalize method normalizes the vector v1 to unit length and places
the result in this. The second normalize method normalizes the vector this
and places the resulting unit vector back into this.
This method returns the (four-space) angle, in radians, between this vector and
the vector v1 parameter. The return value is constrained to the range [0, π].
Constructors
These seven constructors each return a new Quat4d. The first constructor gener-
ates a quaternion from four floating-point numbers x, y, z, and w. The second
constructor generates a quaternion from the first four elements of array q of
length four. The third constructor generates a quaternion from the double-preci-
sion quaternion q1. The fourth constructor generates a quaternion from the sin-
gle-precision quaternion q1. The fifth and sixth constructors generate a Quat4d
from tuple t1. The final constructor generates a quaternion with the value of
(0.0, 0.0, 0.0, 0.0).
Methods
The first conjugate method sets the values of this quaternion to the conjugate of
quaternion q1. The second conjugate method negates the value of each of this
quaternion’s x, y, and z coordinates in place.
The first mul method sets the value of this quaternion to the quaternion product
of quaternions q1 and q2 (this = q1 * q2). Note that this is safe for aliasing (that
is, this can be q1 or q2). The second mul method sets the value of this quater-
nion to the quaternion product of itself and q1 (this = this * q1).
The first inverse method sets the value of this quaternion to the quaternion
inverse of quaternion q1. The second inverse method sets the value of this
quaternion to the quaternion inverse of itself.
The first normalize method sets the value of this quaternion to the normalized
value of quaternion q1. The second normalize method normalizes the value of
this quaternion in place.
These set methods set the value of this quaternion to the rotational component
of the passed matrix.
The first method performs a great circle interpolation between this quaternion
and the quaternion parameter and places the result into this quaternion. The sec-
ond method performs a great circle interpolation between quaternion q1 and
quaternion q2 and places the result into this quaternion.
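Great circle interpolation between unit quaternions is commonly implemented as spherical linear interpolation (slerp). This is an illustrative sketch of that technique over (x, y, z, w) arrays, not the vecmath implementation: it weights the endpoints by sines of fractions of the arc angle, takes the shorter arc when the quaternions point into opposite hemispheres, and falls back to linear interpolation when they are nearly parallel.

```java
public class SlerpDemo {
    // Spherical linear interpolation between two unit quaternions
    // q1 and q2, stored as (x, y, z, w) arrays, with parameter alpha
    // in [0, 1].
    static double[] slerp(double[] q1, double[] q2, double alpha) {
        double d = q1[0] * q2[0] + q1[1] * q2[1]
                 + q1[2] * q2[2] + q1[3] * q2[3];
        double sign = 1.0;
        if (d < 0.0) { d = -d; sign = -1.0; } // take the shorter arc
        double w1, w2;
        if (d > 0.9999) {                     // nearly parallel: use lerp
            w1 = 1.0 - alpha;
            w2 = alpha;
        } else {
            double theta = Math.acos(d);
            double s = Math.sin(theta);
            w1 = Math.sin((1.0 - alpha) * theta) / s;
            w2 = Math.sin(alpha * theta) / s;
        }
        double[] r = new double[4];
        for (int i = 0; i < 4; i++) {
            r[i] = w1 * q1[i] + sign * w2 * q2[i];
        }
        return r;
    }

    public static void main(String[] args) {
        // Halfway between the identity and a 180-degree rotation about z
        // is a 90-degree rotation about z: (0, 0, sqrt(0.5), sqrt(0.5)).
        double[] r = slerp(new double[] {0, 0, 0, 1},
                           new double[] {0, 0, 1, 0}, 0.5);
        System.out.println(r[2] + " " + r[3]); // both ~0.7071
    }
}
```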
Variables
The component values of a Tuple4f are directly accessible through the public
variables x, y, z, and w. To access the x component of a Tuple4f called upperLeftCorner, a programmer would write upperLeftCorner.x. The programmer
would access the y, z, and w components similarly.
public float x
public float y
public float z
public float w
Constructors
These five constructors each return a new Tuple4f. The first constructor generates
a Tuple4f from four floating-point numbers x, y, z, and w. The second constructor
(Tuple4f(float t[])) generates a Tuple4f from the first four elements of array
t. The third constructor generates a Tuple4f from the double-precision tuple t1.
The fourth constructor generates a Tuple4f from the single-precision tuple t1.
The final constructor generates a Tuple4f with the value of (0.0, 0.0, 0.0, 0.0).
Methods
The first set method sets the value of this tuple to the specified x, y, z, and w val-
ues. The second set method sets the value of this tuple to the specified coordi-
nates in the array. The next two methods set the value of tuple this to the value
of tuple t1. The get methods copy the value of this tuple into the tuple t.
The first add method computes the element-by-element sum of tuples t1 and t2
and places the result in this. The second add method computes the ele-
ment-by-element sum of this tuple and tuple t1 and places the result in this.
The first sub method performs the element-by-element subtraction of tuple t2
from tuple t1 and places the result in this (this = t1 – t2). The second sub
method performs the element-by-element subtraction of tuple t1 from this tuple
and places the result in this (this = this – t1).
The first negate method sets the values of this tuple to the negative of the values
from tuple t1. The second negate method negates the tuple this and places the
resulting tuple back into this.
The first scale method multiplies each element of the tuple t1 by the scale fac-
tor s and places the resulting scaled tuple into this. The second scale method
multiplies the tuple this by the scale factor s, replacing this with the scaled
value. The first scaleAdd method scales this tuple by the scale factor s, adds the
result to tuple t1, and places the result into tuple this (this = s*this + t1). The
second scaleAdd method scales the tuple t1 by the scale factor s, adds the result
to the tuple t2, and places the result into the tuple this (this = s*t1 + t2).
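The two scaleAdd variants can be sketched in plain Java on a float[] {x, y, z, w} standing in for a Tuple4f (each sketch returns the mutated array only so the result is easy to inspect; the real methods are void):

```java
// Sketch of the two Tuple4f.scaleAdd variants on a float[] {x, y, z, w}.
public class ScaleAdd {
    // this = s*this + t1
    public static float[] scaleAdd(float[] self, float s, float[] t1) {
        for (int i = 0; i < 4; i++) self[i] = s * self[i] + t1[i];
        return self;
    }
    // this = s*t1 + t2
    public static float[] scaleAdd(float[] self, float s, float[] t1, float[] t2) {
        for (int i = 0; i < 4; i++) self[i] = s * t1[i] + t2[i];
        return self;
    }
}
```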
This method returns a string that contains the values of this Tuple4f. The form is
(x, y, z, w).
The first method returns true if all of the data members of Tuple4f t1 are equal
to the corresponding data members in this Tuple4f. The second method returns
true if the Object t1 is of type Tuple4f and all of the data members of t1 are
equal to the corresponding data members in this Tuple4f.
This method returns true if the L∞ distance between this Tuple4f and Tuple4f t1
is less than or equal to the epsilon parameter. Otherwise, this method returns
false. The L∞ distance is equal to

MAX[abs(x1 – x2), abs(y1 – y2), abs(z1 – z2), abs(w1 – w2)]
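The epsilonEquals test can be sketched in plain Java on float[] {x, y, z, w} tuples: the L∞ distance is simply the largest per-component absolute difference.

```java
// Sketch of Tuple4f.epsilonEquals: true when the largest per-component
// absolute difference (the L-infinity distance) is within epsilon.
public class EpsilonEquals {
    public static boolean epsilonEquals(float[] a, float[] b, float epsilon) {
        float d = 0;
        for (int i = 0; i < 4; i++) d = Math.max(d, Math.abs(a[i] - b[i]));
        return d <= epsilon;
    }
}
```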
The first absolute method sets each component of this tuple to its absolute
value. The second absolute method sets each component of this tuple to the
absolute value of the corresponding component in tuple t.
The first clamp method clamps this tuple to the range [min, max]. The second
clamp method clamps this tuple to the range [min, max] and places the values
into tuple t. The first clampMin method clamps the minimum value of this tuple
to the min parameter. The second clampMin method clamps the minimum value
of this tuple to the min parameter and places the values into the tuple t. The first
clampMax method clamps the maximum value of this tuple to the max parameter.
The second clampMax method clamps the maximum value of this tuple to the max
parameter and places the values into the tuple t.
The first interpolate method linearly interpolates between tuples t1 and t2 and
places the result into this tuple (this = (1 – alpha) * t1 + alpha * t2). The second
interpolate method linearly interpolates between this tuple and tuple t1 and
places the result into this tuple (this = (1 – alpha) * this + alpha * t1).
This method returns a hash number based on the data values in this object. Two
different Tuple4f objects with identical data values (that is, equals(Tuple4f)
returns true) will return the same hash number. Two Tuple4f objects with differ-
ent data members may return the same hash value, although this is not likely.
Constructors
These eight constructors each return a new Point4f. The first constructor gener-
ates a Point4f from four floating-point numbers x, y, z, and w. The second constructor (Point4f(float p[])) generates a Point4f from the first four elements of
array p. The third constructor generates a Point4f from the double-precision
point p1. The fourth constructor generates a Point4f from the single-precision
point p1. The fifth and sixth constructors generate a Point4f from tuple t1. The
seventh constructor generates a Point4f from the specified Tuple3f; the w component of this point is set to 1. The final constructor generates a Point4f with the
value of (0.0, 0.0, 0.0, 0.0).
Methods
This method sets the x,y,z components of this point to the corresponding compo-
nents of tuple t1. The w component of this point is set to 1.
This method computes the L1 (Manhattan) distance between this point and point
p1. The L1 distance is equal to

abs(x1 – x2) + abs(y1 – y2) + abs(z1 – z2) + abs(w1 – w2)
This method computes the L∞ distance between this point and point p1. The L∞
distance is equal to

MAX[abs(x1 – x2), abs(y1 – y2), abs(z1 – z2), abs(w1 – w2)]
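Both point distances can be sketched on float[] {x, y, z, w} points standing in for Point4f: L1 sums the component differences, while L∞ takes their maximum.

```java
// Sketch of Point4f.distanceL1 (Manhattan) and the L-infinity distance
// on float[] {x, y, z, w} points.
public class PointDistances {
    public static float distanceL1(float[] p, float[] q) {
        float d = 0;
        for (int i = 0; i < 4; i++) d += Math.abs(p[i] - q[i]);
        return d;
    }
    public static float distanceLinf(float[] p, float[] q) {
        float d = 0;
        for (int i = 0; i < 4; i++) d = Math.max(d, Math.abs(p[i] - q[i]));
        return d;
    }
}
```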
Constructors
These seven constructors each return a new Color4f. The first constructor gener-
ates a Color4f from four floating-point numbers x, y, z, and w. The second con-
structor generates a Color4f from the first four elements of array c. The third
constructor generates a Color4f from the single-precision color c1. The fourth
and fifth constructors generate a Color4f from tuple t1. The sixth constructor
generates a Color4f from the specified AWT Color object. The final constructor
generates a Color4f with the value of (0.0, 0.0, 0.0, 0.0).
Methods
The set method sets the R,G,B,A values of this Color4f object to those of the
specified AWT Color object. The get method returns a new AWT Color object
initialized with the R,G,B,A values of this Color4f object.
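The conversion is a straightforward scaling: AWT stores each channel as a 0–255 integer, while Color4f uses 0.0–1.0 floats. A sketch of the round trip using the standard java.awt.Color API (the helper names here are this sketch's own):

```java
import java.awt.Color;

// Sketch of the Color4f <-> java.awt.Color round trip. AWT's float
// constructor and getRGBComponents handle the 0-255 <-> 0.0-1.0 scaling.
public class ColorBridge {
    public static float[] fromAwt(Color c) {
        return c.getRGBComponents(null);   // {r, g, b, a}, each in [0, 1]
    }
    public static Color toAwt(float[] rgba) {
        return new Color(rgba[0], rgba[1], rgba[2], rgba[3]);
    }
}
```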
Constructors
These eight constructors each return a new Vector4f. The first constructor gener-
ates a Vector4f from four floating-point numbers x, y, z, and w. The second con-
structor generates a Vector4f from the first four elements of array v. The third
constructor generates a Vector4f from the double-precision Vector4d v1. The
fourth constructor generates a Vector4f from the single-precision Vector4f v1.
The fifth and sixth constructors generate a Vector4f from tuple t1. The seventh
constructor generates a Vector4f from the specified Tuple3f; the w component
of this vector is set to 0. The final constructor generates a Vector4f with the value
of (0.0, 0.0, 0.0, 0.0).
Methods
This method sets the x,y,z components of this vector to the corresponding com-
ponents of tuple t1. The w component of this vector is set to 0.
The length method computes the length of the vector this and returns its length
as a single-precision floating-point number. The lengthSquared method com-
putes the square of the length of the vector this and returns its length as a sin-
gle-precision floating-point number.
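These two methods can be sketched on a float[] {x, y, z, w} standing in for a Vector4f; lengthSquared avoids the square root, which is why it is the cheaper call when only comparing magnitudes.

```java
// Sketch of Vector4f.length and Vector4f.lengthSquared on
// float[] {x, y, z, w} vectors.
public class VecLength {
    public static float lengthSquared(float[] v) {
        return v[0]*v[0] + v[1]*v[1] + v[2]*v[2] + v[3]*v[3];
    }
    public static float length(float[] v) {
        return (float) Math.sqrt(lengthSquared(v));
    }
}
```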
The dot method computes the dot product between this vector and the vector v1
and returns the resulting value.
The first normalize method sets the value of this vector to the normalization of
vector v1. The second normalize method normalizes this vector in place.
This method returns the (four-space) angle, in radians, between this vector and
the vector v1 parameter. The return value is constrained to the range [0, π].
Constructors
These seven constructors each return a new Quat4f. The first constructor gener-
ates a quaternion from four floating-point numbers x, y, z, and w. The second
constructor generates a quaternion from the first four elements of array
q of length four. The third constructor generates a quaternion from the dou-
ble-precision quaternion q1. The fourth constructor generates a quaternion from
the single-precision quaternion q1. The fifth and sixth constructors generate a
quaternion from tuple t1. The final constructor generates a quaternion with the
value of (0.0, 0.0, 0.0, 0.0).
Methods
The first conjugate method sets the value of this quaternion to the conjugate of
quaternion q1. The second conjugate method sets the value of this quaternion to
the conjugate of itself.
The first mul method sets the value of this quaternion to the quaternion product
of quaternions q1 and q2 (this = q1 * q2). Note that this is safe for aliasing (that
is, this can be q1 or q2). The second mul method sets the value of this quater-
nion to the quaternion product of itself and q1 (this = this * q1).
The first inverse method sets the value of this quaternion to the quaternion
inverse of quaternion q1. The second inverse method sets the value of this
quaternion to the quaternion inverse of itself.
The first normalize method sets the value of this quaternion to the normalized
value of quaternion q1. The second normalize method normalizes the value of
this quaternion in place.
These set methods set the value of this quaternion to the rotational component
of the passed matrix.
The first method performs a great circle interpolation between this quaternion
and quaternion q1 and places the result into this quaternion. The second method
performs a great circle interpolation between quaternion q1 and quaternion q2
and places the result into this quaternion.
Variables
The component values of a Tuple4i are directly accessible through the public
variables x, y, z, and w. To access the x component of a Tuple4i called upperLeftCorner, a programmer would write upperLeftCorner.x. The programmer
would access the y, z, and w components similarly.
Constructors
These four constructors each return a new Tuple4i. The first constructor gener-
ates a Tuple4i from the specified x, y, z, and w coordinates. The second construc-
tor generates a Tuple4i from the array of length 4. The third constructor
generates a Tuple4i from the specified Tuple4i. The final constructor generates a
Tuple4i with the value of (0,0,0,0).
Methods
The first set method sets the value of this tuple to the specified x, y, z, and w
coordinates. The second set method sets the value of this tuple to the specified
coordinates in the array of length 4. The third set method sets the value of this
tuple to the value of tuple t1. The first get method copies the values of this tuple
into the array t. The second get method copies the values of this tuple into the
tuple t.
The first method sets the value of this tuple to the sum of tuples t1 and t2. The
second method sets the value of this tuple to the sum of itself and t1.
The first method sets the value of this tuple to the difference of tuples t1 and t2
(this = t1 – t2). The second method sets the value of this tuple to the difference
of itself and t1 (this = this – t1).
The first method sets the value of this tuple to the negation of tuple t1. The sec-
ond method negates the value of this tuple in place.
The first method sets the value of this tuple to the scalar multiplication of tuple t1 by the scale factor s. The second method sets the value of this tuple to the scalar multiplication of itself by the scale factor s.
public final void scaleAdd(int s, Tuple4i t1, Tuple4i t2) New in 1.2
public final void scaleAdd(int s, Tuple4i t1) New in 1.2
The first method sets the value of this tuple to the scalar multiplication of tuple
t1 plus tuple t2 (this = s*t1 + t2). The second method sets the value of this tuple
to the scalar multiplication of itself and then adds tuple t1 (this = s*this + t1).
public final void clamp(int min, int max, Tuple4i t) New in 1.2
public final void clamp(int min, int max) New in 1.2
The first method clamps the tuple parameter to the range [min, max] and places the values into this tuple. The second method clamps this tuple to the range [min, max].
public final void clampMin(int min, Tuple4i t) New in 1.2
public final void clampMin(int min) New in 1.2
The first method clamps the minimum value of the tuple parameter to the min
parameter and places the values into this tuple. The second method clamps the
minimum value of this tuple to the min parameter.
public final void clampMax(int max, Tuple4i t) New in 1.2
public final void clampMax(int max) New in 1.2
The first method clamps the maximum value of the tuple parameter to the max
parameter and places the values into this tuple. The second method clamps the
maximum value of this tuple to the max parameter.
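The clamp family can be sketched on an int[] {x, y, z, w} standing in for a Tuple4i (each sketch returns the mutated array only for easy inspection; the real methods are void):

```java
// Sketch of Tuple4i.clamp / clampMin / clampMax on an int[] {x, y, z, w}.
public class ClampDemo {
    public static int[] clamp(int min, int max, int[] t) {
        for (int i = 0; i < 4; i++) t[i] = Math.min(max, Math.max(min, t[i]));
        return t;
    }
    public static int[] clampMin(int min, int[] t) {
        for (int i = 0; i < 4; i++) t[i] = Math.max(min, t[i]);
        return t;
    }
    public static int[] clampMax(int max, int[] t) {
        for (int i = 0; i < 4; i++) t[i] = Math.min(max, t[i]);
        return t;
    }
}
```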
The first method sets each component of the tuple parameter to its absolute value
and places the modified values into this tuple. The second method sets each com-
ponent of this tuple to its absolute value.
This method returns a string that contains the values of this Tuple4i.
This method returns a hash code value based on the data values in this object.
Two different Tuple4i objects with identical data values (i.e., Tuple4i.equals
returns true) will return the same hash code value. Two objects with different
data members may return the same hash value, although this is not likely.
The Point4i class extends Tuple4i. The Point4i is a four-element point repre-
sented by signed integer x,y,z,w coordinates.
Constructors
These four constructors each return a Point4i. The first constructor generates a
Point4i from the specified x, y, z, and w coordinates. The second constructor
generates a Point4i from the array of length 4. The third constructor generates a
Point4i from the specified Tuple4i. The final constructor generates a Point4i with
the value of (0,0,0,0).
Variables
The component values of an AxisAngle4d are directly accessible through the
public variables x, y, z, and angle. To access the x component of an
AxisAngle4d called myRotation, a programmer would write myRotation.x. The
programmer would access the y, z, and angle components similarly.
public double x
public double y
public double z
public double angle
The x, y, and z coordinates and the rotational angle, respectively. The rotation
angle is expressed in radians.
Constructors
These six constructors each return a new AxisAngle4d. The first constructor gen-
erates an axis-angle from four floating-point numbers x, y, z, and angle. The
second constructor generates an axis-angle from the first four elements of array
a. The third constructor generates an axis-angle from the double-precision
axis-angle a1. The fourth constructor generates an axis-angle from the sin-
gle-precision axis-angle a1. The fifth constructor generates an axis-angle from
the specified axis and angle. The final constructor generates an axis-angle with
the value of (0.0, 0.0, 1.0, 0.0).
Methods
The first set method sets the value of this axis-angle to the specified x, y, z, and
angle coordinates. The second set method sets the value of this axis-angle to
the specified coordinates in the array a. The next four set methods set the value of this
axis-angle to the rotational component of the passed matrix m1. The next two set
methods set the value of this axis-angle to the value of axis-angle a1. The next
two set methods set the value of this axis-angle to the value of the passed
quaternion q1. The last set method sets the value of this axis-angle to the speci-
fied axis and angle. The get method retrieves the value of this axis-angle and
places it into the array a of length four in x,y,z,angle order.
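The quaternion case of set can be sketched in plain Java: a unit quaternion {x, y, z, w} encodes a rotation of angle θ about an axis, with w = cos(θ/2) and (x, y, z) = axis · sin(θ/2), so the axis-angle is recovered by normalizing (x, y, z) and inverting the half-angle. The array layout {x, y, z, angle} and the degenerate-case default below are this sketch's own conventions.

```java
// Sketch of AxisAngle4d.set(Quat4d): recover axis and angle from a unit
// quaternion double[] {x, y, z, w}; result is double[] {x, y, z, angle}.
public class AxisAngleFromQuat {
    public static double[] set(double[] q) {
        double mag = q[0]*q[0] + q[1]*q[1] + q[2]*q[2];
        if (mag < 1e-12) {
            return new double[] {0, 0, 1, 0};   // no rotation: default axis
        }
        double invLen = 1.0 / Math.sqrt(mag);
        double angle = 2.0 * Math.atan2(Math.sqrt(mag), q[3]);
        return new double[] {q[0]*invLen, q[1]*invLen, q[2]*invLen, angle};
    }
}
```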
This method returns a string that contains the values of this AxisAngle4d. The
form is (x, y, z, angle).
The first method returns true if all of the data members of AxisAngle4d v1 are
equal to the corresponding data members in this axis-angle. The second method
returns true if the Object o1 is of type AxisAngle4d and all of the data members
of o1 are equal to the corresponding data members in this AxisAngle4d.
This method returns true if the L∞ distance between this axis-angle and
axis-angle a1 is less than or equal to the epsilon parameter. Otherwise, this
method returns false. The L∞ distance is equal to

MAX[abs(x1 – x2), abs(y1 – y2), abs(z1 – z2), abs(angle1 – angle2)]
This method returns a hash number based on the data values in this object. Two
different AxisAngle4d objects with identical data values (that is,
equals(AxisAngle4d) returns true) will return the same hash number. Two
AxisAngle4d objects with different data members may return the same hash
value, although this is not likely.
Variables
The component values of an AxisAngle4f are directly accessible through the public variables x, y, z, and angle. To access the x component of an AxisAngle4f called myRotation, a programmer would write myRotation.x. The programmer would access the y, z, and angle components similarly.
public float x
public float y
public float z
public float angle
The x, y, and z coordinates and the rotational angle, respectively. The rotation
angle is expressed in radians.
Constructors
These six constructors each return a new AxisAngle4f. The first constructor gen-
erates an axis-angle from four floating-point numbers x, y, z, and angle. The
second constructor generates an axis-angle from the first four elements of array
a. The third constructor generates an axis-angle from the single-precision
axis-angle a1. The fourth constructor generates an axis-angle from the dou-
ble-precision axis-angle a1. The fifth constructor generates an axis-angle from
the specified axis and angle. The final constructor generates an axis-angle with
the value of (0.0, 0.0, 1.0, 0.0).
Methods
The first set method sets the value of this axis-angle to the specified x, y, z, and
angle coordinates. The second set method sets the value of this axis-angle to
the specified coordinates in the array a. The next four set methods set the value
of this axis-angle to the rotational component of the passed matrix m1. The next
two set methods set the value of this axis-angle to the value of axis-angle a1.
The next two set methods set the value of this axis-angle to the value of the
passed quaternion q1. The last set method sets the value of this axis-angle to the
specified axis and angle. The get method retrieves the value of this axis-angle
and places it into the array a of length four in x,y,z,angle order.
This method returns a string that contains the values of this axis-angle. The form
is (x, y, z, angle).
The first method returns true if all of the data members of axis-angle a1 are
equal to the corresponding data members in this axis-angle. The second method
returns true if the Object o1 is of type AxisAngle4f and all of the data members
of o1 are equal to the corresponding data members in this AxisAngle4f.
This method returns true if the L∞ distance between this axis-angle and
axis-angle a1 is less than or equal to the epsilon parameter. Otherwise, this
method returns false. The L∞ distance is equal to

MAX[abs(x1 – x2), abs(y1 – y2), abs(z1 – z2), abs(angle1 – angle2)]
This method returns a hash number based on the data values in this object. Two
different AxisAngle4f objects with identical data values (that is,
equals(AxisAngle4f) returns true) will return the same hash number. Two
AxisAngle4f objects with different data members may return the same hash
value, although this is not likely.
Constructors
These nine constructors each return a new GVector. The first constructor gener-
ates a generalized mathematical vector with all elements set to 0.0: length rep-
resents the number of elements in the vector. The second and third constructors
generate a generalized mathematical vector and copy the initial value from the
parameter vector. The next four constructors generate a generalized mathemati-
cal vector and copy the initial value from the tuple parameter tuple. The final
constructor generates a generalized mathematical vector by copying length elements from the array parameter. The array must contain at least length elements
(i.e., vector.length ≥ length). The length of this new GVector is set to the
specified length.
Methods
The first add method computes the element-by-element sum of this GVector and
GVector v1 and places the result in this. The second add method computes the
element-by-element sum of GVectors v1 and v2 and places the result in this.
The first sub method performs the element-by-element subtraction of GVector v1
from this GVector and places the result in this (this = this – v1). The second
sub method performs the element-by-element subtraction of GVector v2 from
GVector v1 and places the result in this (this = v1 – v2).
The first mul method multiplies matrix m1 times vector v1 and places the result
into this vector (this = m1 * v1). The second mul method multiplies the transpose
of vector v1 (that is, v1 becomes a row vector with respect to the multiplication)
times matrix m1 and places the result into this vector (this = transpose(v1) * m1).
The result is technically a row vector, but the GVector class only knows about
column vectors, so the result is stored as a column vector.
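Both mul variants can be sketched with plain arrays (a double[][] for the GMatrix and a double[] for the GVector); the transpose variant simply accumulates down columns instead of across rows, which is why its result can be stored back as a column vector.

```java
// Sketch of the two GVector.mul variants: this = m * v (column vector)
// and this = transpose(v) * m (row vector, stored back as a column).
public class GVecMul {
    public static double[] mul(double[][] m, double[] v) {
        double[] out = new double[m.length];
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < v.length; j++)
                out[i] += m[i][j] * v[j];
        return out;
    }
    public static double[] mulTranspose(double[] v, double[][] m) {
        double[] out = new double[m[0].length];
        for (int j = 0; j < out.length; j++)
            for (int i = 0; i < v.length; i++)
                out[j] += v[i] * m[i][j];
        return out;
    }
}
```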
This method negates the vector this and places the resulting vector back into
this.
This method changes the size of this vector dynamically. If the size is increased,
no data values are lost. If the size is decreased, only those data values whose vec-
tor positions were eliminated are lost.
The first set method sets the values of this vector to the values found in the array v; the array must be at least as long as this vector. The second set method sets the values of this vector to the values in vector v. The last five set methods set the value of this vector to the values in tuple t.
These methods set and retrieve the specified index value of this vector.
The norm method returns the square root of the sum of the squares of this vector
(its length in n-dimensional space). The normSquared method returns the sum of
the squares of this vector (its length in n-dimensional space).
The first normalize method sets the value of this vector to the normalization of
vector v1. The second normalize method normalizes this vector in place.
The first scale method sets the value of this vector to the scalar multiplication of
the scale factor s with the vector v1. The second scale method scales this vector
by the scale factor s. The scaleAdd method scales the vector v1 by the scale fac-
tor s, adds the result to the vector v2, and places the result into this vector
(this = s*v1 + v2).
This method returns a string that contains the values of this vector.
This method returns a hash number based on the data values in this object. Two
different GVector objects with identical data values (that is, equals(GVector)
returns true) will return the same hash number. Two objects with different data
members may return the same hash value, although this is not likely.
The first method returns true if all of the data members of GVector vector1 are
equal to the corresponding data members in this GVector. The second method
returns true if the Object o1 is of type GVector and all of the data members of o1 are equal to the corresponding data members in this GVector.
This method returns true if the L∞ distance between this vector and vector v1 is
less than or equal to the epsilon parameter. Otherwise, this method returns
false. The L∞ distance is equal to

MAX[i = 0,1,2, ... n; abs(this.v(i) – v1.v(i))]
This method returns the dot product of this vector and vector v1.
This method returns the (n-space) angle, in radians, between this vector and the
vector v1 parameter. The return value is constrained to the range [0, π].
The first method linearly interpolates between vectors v1 and v2 and places the
result into this vector (this = (1 – alpha) * v1 + alpha * v2). The second method
linearly interpolates between this vector and vector v1 and places the result into
this vector (this = (1 – alpha) * this + alpha * v1).
To multiply two matrices together and store the result in a third, a Java 3D application or applet would write
matrix3.mul(matrix1, matrix2). Here matrix3 receives the results of multi-
plying matrix1 with matrix2.
The Java 3D model for 3 × 3 transformations is

    [x']   [m00 m01 m02] [x]
    [y'] = [m10 m11 m12] [y]
    [z']   [m20 m21 m22] [z]

    x' = m00·x + m01·y + m02·z
    y' = m10·x + m11·y + m12·z
    z' = m20·x + m21·y + m22·z
Variables
The component values of a Matrix3f are directly accessible through the public
variables m00, m01, m02, m10, m11, m12, m20, m21, and m22. To access the element
in row 2 and column 0 of matrix rotate, a programmer would write
rotate.m20. A programmer would access the other values similarly.
Constructors
These constructors each return a new Matrix3f object. The first constructor gen-
erates a 3 × 3 matrix from the nine values provided. The second constructor gen-
erates a 3 × 3 matrix from the first nine values in the array v. The third and fourth
constructors generate a new matrix with the same values as the passed matrix m1.
The final constructor generates a 3 × 3 matrix with all nine values set to 0.0.
Methods
These two set methods set the value of the matrix this to the matrix conversion
of the quaternion argument q1.
These two set methods set the value of the matrix this to the matrix conversion
of the axis and angle argument a1.
The first method sets the value of this matrix to a scale matrix with the passed
scale amount. The second method sets the values of this matrix to the
row-major array parameter (that is, the first three elements of the array are cop-
ied into the first row of this matrix, and so forth).
The setElement and getElement methods provide a means for accessing a sin-
gle element within a 3 × 3 matrix using indices. This is not a preferred method of
access, but Java 3D provides these methods for functional completeness. The
setElement method takes a row index row (where a value of 0 represents the
first row and a value of 2 represents the third row), a column index column
(where a value of 0 represents the first column and a value of 2 represents the
third column), and a value. It sets the corresponding element in matrix this to
the specified value. The getElement method also takes a row index row and a
column index column. It returns the element at the corresponding locations as a
floating-point value.
The first add method adds the matrix m1 to the matrix m2 and places the result
into the matrix this. The second add method adds the matrix this to the matrix
m1 and places the result into the matrix this. The first sub method performs an
element-by-element subtraction of matrix m2 from matrix m1 and places the result
into the matrix this. The second sub method performs an element-by-element
subtraction of the matrix m1 from the matrix this and places the result into the
matrix this.
The first method multiplies this matrix by the tuple t and places the result back
into the tuple (t = this*t). The second method multiplies this matrix by the tuple
t and places the result into the tuple result (result = this*t).
The first method transposes this matrix in place. The second method sets the
value of this matrix to the transpose of the matrix m1.
The first method inverts this matrix in place. The second method sets the value of
this matrix to the inverse of the matrix m1.
The determinant method computes the determinant of the matrix this and
returns the computed value.
The three rot methods construct rotation matrices that rotate in a counter-clock-
wise (right-handed) direction around the axis specified as the last letter of the
method name. The constructed matrix replaces the value of the matrix this. The
rotation angle is expressed in radians.
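The z-axis case can be sketched with a plain row-major double[][] standing in for a Matrix3f; the helper that applies the matrix to a tuple is this sketch's own addition, included to show the counterclockwise convention in action.

```java
// Sketch of Matrix3f.rotZ: a counterclockwise (right-handed) rotation about
// +z by `angle` radians, as a row-major 3x3 array.
public class RotZ {
    public static double[][] rotZ(double angle) {
        double c = Math.cos(angle), s = Math.sin(angle);
        return new double[][] {
            { c, -s, 0 },
            { s,  c, 0 },
            { 0,  0, 1 }};
    }
    // Apply m to a column tuple {x, y, z}.
    public static double[] transform(double[][] m, double[] t) {
        return new double[] {
            m[0][0]*t[0] + m[0][1]*t[1] + m[0][2]*t[2],
            m[1][0]*t[0] + m[1][1]*t[1] + m[1][2]*t[2],
            m[2][0]*t[0] + m[2][1]*t[1] + m[2][2]*t[2]};
    }
}
```

Rotating (1, 0, 0) by π/2 about +z carries it onto the +y axis, as expected for a counterclockwise rotation.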
The first mul method multiplies matrix m1 with matrix m2 and places the result
into the matrix this. The second mul method multiplies the matrix this with the
matrix m1 and places the result into matrix this.
The first mulNormalize method multiplies this matrix by matrix m1, performs an
SVD normalization of the result, and places the result back into this matrix (this
= SVDnorm(this ⋅ m1)). The second mulNormalize method multiplies matrix m1
by matrix m2, performs an SVD normalization of the result, and places the result
into this matrix (this = SVDnorm(m1 ⋅ m2)).
The first method returns true if all of the data members of Matrix3f m1 are equal
to the corresponding data members in this Matrix3f. The second method returns
true if the Object o1 is of type Matrix3f and all of the data members of o1 are
equal to the corresponding data members in this Matrix3f.
This method returns true if the L∞ distance between this Matrix3f and Matrix3f
m1 is less than or equal to the epsilon parameter. Otherwise, this method
returns false. The L∞ distance is equal to
MAX[i = 0,1,2; j = 0,1,2; abs(this.m(i,j) – m1.m(i,j))]
The first method negates the value of this matrix in place (this = –this). The sec-
ond method sets the value of this matrix equal to the negation of the matrix m1
(this = –m1).
This method sets the scale component of the current matrix by factoring out the
current scale (by doing an SVD) and multiplying by the new scale.
This method adds a scalar to each component of the matrix m1 and places the
result into this. Matrix m1 is not modified.
This method multiplies each component of the matrix m1 by a scalar and places
the result into this. Matrix m1 is not modified.
The first method multiplies this matrix by the tuple t and places the result back
into the tuple (t = this*t). The second method multiplies this matrix by the
tuple t and places the result into the tuple result (result = this*t).
The hashCode method returns a hash number based on the data values in this
object. Two different Matrix3f objects with identical data values (that is,
equals(Matrix3f) returns true) will return the same hash number. Two
Matrix3f objects with different data members may return the same hash value,
although this is not likely.
The toString method returns a string that contains the values of this Matrix3f.
Variables
The component values of a Matrix3d are directly accessible through the public
variables m00, m01, m02, m10, m11, m12, m20, m21, and m22. To access the element
in row 2 and column 0 of the matrix named rotate, a programmer would write
rotate.m20. Other matrix values are accessed similarly.
Constructors
These constructors each return a new Matrix3d object. The first constructor gen-
erates a 3 × 3 matrix from the nine values provided. The second constructor gen-
erates a 3 × 3 matrix from the first nine values in the array v. The third
constructor generates a 3 × 3 matrix with all nine values set to 0.0. The fourth
and fifth constructors generate a 3 × 3 matrix with the same values as the matrix
m1 parameter.
Methods
These methods set the value of this matrix to the value of the argument.
These methods set the value of the matrix this to a scale matrix with the passed
scale amount.
These two set methods set the value of the matrix this to the matrix conversion
of the axis and angle argument a1.
These two set methods set the value of the matrix this to the matrix conversion
of the quaternion argument q1.
The setElement and getElement methods provide a means for accessing a sin-
gle element within a 3 × 3 matrix using indices. This is not a preferred method of
access, but Java 3D provides these methods for functional completeness. The
setElement method takes a row index row (where a value of 0 represents the
first row and a value of 2 represents the third row), a column index column
(where a value of 0 represents the first column and a value of 2 represents the
third column), and a value. It sets the corresponding element in matrix this to
the specified value. The getElement method also takes a row index row and a
column index column and returns the element at the corresponding locations as a
floating-point value.
The first add method adds the matrix m1 to the matrix m2 and places the result
into the matrix this. The second add method adds the matrix this to the matrix
m1 and places the result into the matrix this. The first sub method performs an
element-by-element subtraction of matrix m2 from matrix m1 and places the result
into the matrix this. The second sub method performs an element-by-element
subtraction of the matrix m1 from the matrix this and places the result into the
matrix this.
This method adds a scalar to each component of the matrix m1 and places the
result into this. Matrix m1 is not modified.
The first method multiplies this matrix by the tuple t and places the result back
into the tuple (t = this*t). The second method multiplies this matrix by the tuple
t and places the result into the tuple result (result = this*t).
The first method transposes this matrix in place. The second method sets the
value of this matrix to the transpose of the matrix m1.
The first method inverts this matrix in place. The second method sets the value of
this matrix to the inverse of the matrix m1.
The determinant method computes the determinant of the matrix this and
returns the computed value.
The three rot methods construct rotation matrices that rotate in a counter-clock-
wise (right-handed) direction around the axis specified by the final letter of the
method name. The constructed matrix replaces the value of the matrix this. The
rotation angle is expressed in radians.
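The rotation convention described above can be sketched in plain Java (without the javax.vecmath classes on the classpath); rotZ here builds the counterclockwise, right-handed rotation about the Z axis, with the angle in radians:

```java
// Sketch of the rotZ semantics: a counterclockwise (right-handed)
// rotation about the Z axis; the angle is expressed in radians.
public class RotZDemo {
    /** Returns a 3x3 row-major rotation matrix about the Z axis. */
    static double[][] rotZ(double angle) {
        double c = Math.cos(angle), s = Math.sin(angle);
        return new double[][] {
            { c, -s, 0 },
            { s,  c, 0 },
            { 0,  0, 1 }
        };
    }

    /** Multiplies a 3x3 matrix by a three-element tuple (t' = m * t). */
    static double[] mul(double[][] m, double[] t) {
        double[] r = new double[3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                r[i] += m[i][j] * t[j];
        return r;
    }

    public static void main(String[] args) {
        // Rotating the +X axis by 90 degrees counterclockwise yields +Y.
        double[] v = mul(rotZ(Math.PI / 2), new double[] { 1, 0, 0 });
        System.out.printf("%.3f %.3f %.3f%n", v[0], v[1], v[2]);
    }
}
```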
The first mul method multiplies matrix m1 with matrix m2 and places the result
into the matrix this. The second mul method multiplies matrix this with matrix
m1 and places the result into the matrix this.
The first mulNormalize method multiplies this matrix by matrix m1, performs an
SVD normalization of the result, and places the result back into this matrix (this
= SVDnorm(this ⋅ m1)). The second mulNormalize method multiplies matrix m1
by matrix m2, performs an SVD normalization of the result, and places the result
into this matrix (this = SVDnorm(m1 ⋅ m2)).
The first method returns true if all of the data members of Matrix3d m1 are equal
to the corresponding data members in this Matrix3d. The second method returns
true if the Object t1 is of type Matrix3d and all of the data members of t1 are
equal to the corresponding data members in this Matrix3d.
This method returns true if the L∞ distance between this Matrix3d and
Matrix3d m1 is less than or equal to the epsilon parameter. Otherwise, this
method returns false. The L∞ distance is equal to
MAX[i = 0,1,2; j = 0,1,2; abs(this.m(i,j) – m1.m(i,j))]
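The L∞ test above can be sketched in plain Java (javax.vecmath itself is assumed not to be on the classpath); the distance is simply the largest absolute element-wise difference:

```java
// Plain-Java sketch of the epsilonEquals test: the L-infinity distance
// is the largest absolute difference between corresponding elements.
public class EpsilonEqualsDemo {
    static double linfDistance(double[][] a, double[][] b) {
        double max = 0.0;
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                max = Math.max(max, Math.abs(a[i][j] - b[i][j]));
        return max;
    }

    static boolean epsilonEquals(double[][] a, double[][] b, double epsilon) {
        return linfDistance(a, b) <= epsilon;
    }

    public static void main(String[] args) {
        double[][] m1 = { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };
        double[][] m2 = { { 1.0005, 0, 0 }, { 0, 1, 0 }, { 0, 0, 0.9995 } };
        System.out.println(epsilonEquals(m1, m2, 1e-3));  // true
        System.out.println(epsilonEquals(m1, m2, 1e-4));  // false
    }
}
```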
The first method negates the value of this matrix in place (this = –this). The sec-
ond method sets the value of this matrix equal to the negation of the matrix m1
(this = –m1).
This method sets the scale component of the current matrix by factoring out the
current scale (by doing an SVD) and multiplying by the new scale.
This method multiplies each component of the matrix m1 by a scalar and places
the result into this. Matrix m1 is not modified.
The first method multiplies this matrix by the tuple t and places the result back
into the tuple (t = this*t). The second method multiplies this matrix by the
tuple t and places the result into the tuple result (result = this*t).
The hashCode method returns a hash number based on the data values in this
object. Two different Matrix3d objects with identical data values (that is,
equals(Matrix3d) returns true) will return the same hash number. Two
Matrix3d objects with different data members may return the same hash value,
although this is not likely.
The toString method returns a string that contains the values of this Matrix3d.
Variables
The component values of a Matrix4f are directly accessible through the public
variables m00, m01, m02, m03, m10, m11, m12, m13, m20, m21, m22, m23, m30, m31,
m32, and m33. To access the element in row 2 and column 0 of matrix rotate, a
programmer would write rotate.m20. A programmer would access the other
values similarly.
Constructors
These constructors each return a new Matrix4f object. The first constructor gen-
erates a 4 × 4 matrix from the 16 values provided. The second constructor gener-
ates a 4 × 4 matrix from the first 16 values in the array v. The third constructor
generates a 4 × 4 matrix from the quaternion, translation, and scale values. The
scale is applied only to the rotational components of the matrix (upper 3 × 3) and
not to the translational components. The fourth and fifth constructors generate a
4 × 4 matrix with the same values as the passed matrix m1. The sixth constructor
generates a 4 × 4 matrix from the rotation matrix, translation, and scale values.
The scale is applied only to the rotational components of the matrix (upper 3 × 3)
and not to the translational components of the matrix. The final constructor gen-
erates a 4 × 4 matrix with all 16 values set to 0.0.
Methods
The first two set methods set the value of this matrix to the matrix conversion of
the quaternion argument q1. The next two set methods set the value of this
matrix from the rotation expressed by the quaternion q1, the translation t1, and
the scale s. The next two set methods set the value of this matrix to a copy of
the passed matrix m1. The last two set methods set the value of this matrix to the
matrix conversion of the axis and angle argument a1.
These methods set the rotational component (upper 3 × 3) of this matrix to the
matrix values in the m1 argument. The other elements of this matrix are initial-
ized as if this were an identity matrix (that is, an affine matrix with no transla-
tional component).
The first method sets the value of this matrix to a scale matrix with the passed
scale amount. The second method sets the value of this matrix to the row-major
array parameter (that is, the first four elements of the array are copied into the
first row of this matrix, and so forth).
This method sets the value of this matrix to a translation matrix with the passed
translation value.
These methods set the value of this matrix to a scale and translation matrix. In
the first method, the scale is not applied to the translation, and all of the matrix
values are modified. In the second method, the translation is scaled by the scale
factor, and all of the matrix values are modified.
These two methods set the value of this matrix from the rotation expressed by
the rotation matrix m1, the translation t1, and the scale scale. The translation is
not modified by the scale.
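The composition just described can be sketched in plain Java: the scale multiplies only the rotational (upper 3 × 3) components, while the translation is copied in unscaled.

```java
// Sketch of building a 4x4 from a 3x3 rotation, a translation, and a
// scale, with the scale applied only to the upper 3x3 components.
public class ComposeDemo {
    static double[][] compose(double[][] rot, double[] t, double s) {
        double[][] m = new double[4][4];
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 3; j++)
                m[i][j] = s * rot[i][j];  // scaled rotation
            m[i][3] = t[i];               // translation, not scaled
        }
        m[3][3] = 1.0;                    // bottom row: 0 0 0 1
        return m;
    }

    public static void main(String[] args) {
        double[][] identity = { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };
        double[][] m = compose(identity, new double[] { 5, 6, 7 }, 2.0);
        // Upper 3x3 is 2*I; the last column holds the unscaled translation.
        System.out.println(m[0][0] + " " + m[0][3]);  // 2.0 5.0
    }
}
```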
The first two methods perform an SVD normalization of this matrix in order to
acquire the normalized rotational component. The values are placed into the
matrix parameter m1. The third method performs an SVD normalization of this
matrix to calculate the rotation as a 3 × 3 matrix, the translation, and the scale.
None of the matrix values in this matrix are modified. The fourth method per-
forms an SVD normalization of this matrix to acquire the normalized rotational
component. The values are placed into the quaternion q1. The final method
retrieves the translational components of this matrix and copies them into the
vector trans.
The setElement and getElement methods provide a means for accessing a sin-
gle element within a 4 × 4 matrix using indices. This is not a preferred method of
access, but Java 3D provides these methods for functional completeness. The
setElement method takes a row index row (where a value of 0 represents the
first row and a value of 3 represents the fourth row), a column index column
(where a value of 0 represents the first column and a value of 3 represents the
fourth column), and a value. It sets the corresponding element in matrix this to
the specified value. The getElement method also takes a row index row and a
column index column and returns the element at the corresponding location as a
floating-point value.
This method retrieves the upper 3 × 3 values of this matrix and places them into
the matrix m1.
The first method sets the scale component of the current matrix by factoring out
the current scale (by doing an SVD) and multiplying by the new scale. The sec-
ond method performs an SVD normalization of this matrix to calculate and
return the uniform scale factor. If the matrix has non-uniform scale factors, the
largest of the x, y, and z scale factors will be returned.
This method adds a scalar to each component of the matrix m1 and places the
result into this. Matrix m1 is not modified.
This method multiplies each component of the matrix m1 by a scalar and places
the result into this. Matrix m1 is not modified.
These methods set the rotational component (upper 3 × 3) of this matrix to the
matrix values in the passed argument. The other elements of this matrix are
unchanged. In the first two methods, a singular value decomposition is per-
formed on this object’s upper 3 × 3 matrix to factor out the scale, then this
object’s upper 3 × 3 matrix components are replaced by the passed rotation com-
ponents, and finally the scale is reapplied to the rotational components. In the
next two methods, a singular value decomposition is performed on this object’s
upper 3 × 3 matrix to factor out the scale, then this object’s upper 3 × 3 matrix
components are replaced by the matrix equivalent of the quaternion, and finally
the scale is reapplied to the rotational components. In the last method, a singular
value decomposition is performed on this object’s upper 3 × 3 matrix to factor
out the scale, then this object’s upper 3 × 3 matrix components are replaced by
the matrix equivalent of the axis-angle, and finally the scale is reapplied to the
rotational components.
This method replaces the upper 3 × 3 matrix values of this matrix with the values
in the matrix m1.
This method modifies the translational components of this matrix to the values of
the vector trans. The other values of this matrix are not modified.
The first add method adds the matrix m1 to the matrix m2 and places the result
into the matrix this. The second add method adds the matrix this to the matrix
m1 and places the result into the matrix this. The first sub method performs an
element-by-element subtraction of matrix m2 from matrix m1 and places the result
into the matrix this. The second sub method performs an element-by-element
subtraction of the matrix m1 from the matrix this and places the result into the
matrix this.
The first transpose method transposes the matrix m1 and places the result into
the matrix this. The second transpose method transposes the matrix this and
places the result back into the matrix this.
The first transform method postmultiplies this matrix by the Point3f point and
places the result back into point. The multiplication treats the three-element
point as if its fourth element were 1. The second transform method postmulti-
plies this matrix by the Point3f point and places the result into pointOut.
The first transform method postmultiplies this matrix by the Vector3f normal
and places the result back into normal. The multiplication treats the three-ele-
ment vector as if its fourth element were 0. The second transform method post-
multiplies this matrix by the Vector3f normal and places the result into
normalOut.
The first transform method postmultiplies this matrix by the tuple vec and
places the result back into vec. The second transform method postmultiplies
this matrix by the tuple vec and places the result into vecOut.
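The distinction among these transform variants can be sketched in plain Java: a point is transformed with an implicit fourth element of 1 (so translation applies), while a normal uses 0 (translation is ignored).

```java
// Sketch of transforming points (w = 1) versus normals (w = 0)
// by a 4x4 matrix, as described for the transform methods above.
public class TransformDemo {
    static double[] transform(double[][] m, double[] v, double w) {
        double[] r = new double[3];
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 3; j++)
                r[i] += m[i][j] * v[j];
            r[i] += m[i][3] * w;  // w = 1 for points, 0 for normals
        }
        return r;
    }

    public static void main(String[] args) {
        // A pure translation by (10, 0, 0).
        double[][] m = { { 1, 0, 0, 10 }, { 0, 1, 0, 0 },
                         { 0, 0, 1, 0 },  { 0, 0, 0, 1 } };
        double[] p = transform(m, new double[] { 1, 2, 3 }, 1);  // point moves
        double[] n = transform(m, new double[] { 1, 2, 3 }, 0);  // normal does not
        System.out.println(p[0] + " vs " + n[0]);  // 11.0 vs 1.0
    }
}
```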
The first method negates the value of this matrix in place (this = –this). The sec-
ond method sets the value of this matrix equal to the negation of the matrix m1
(this = –m1).
The first method inverts this matrix in place. The second method sets the value of
this matrix to the inverse of the matrix m1.
The determinant method computes the determinant of the matrix this and
returns the computed value.
The three rot methods construct rotation matrices that rotate in a counter-clock-
wise (right-handed) direction around the axis specified as the last letter of the
method name. The constructed matrix replaces the value of the matrix this. The
rotation angle is expressed in radians.
The first mul method multiplies matrix m1 with matrix m2 and places the result
into the matrix this. The second mul method multiplies the matrix this with
matrix m1 and places the result in matrix this.
The first method returns true if all of the data members of Matrix4f m1 are equal
to the corresponding data members in this Matrix4f. The second method returns
true if the Object t1 is of type Matrix4f and all of the data members of t1 are
equal to the corresponding data members in this Matrix4f.
This method returns true if the L∞ distance between this Matrix4f and Matrix4f
m1 is less than or equal to the epsilon parameter. Otherwise, this method returns
false. The L∞ distance is equal to
MAX[i = 0,1,2,3; j = 0,1,2,3; abs(this.m(i,j) – m1.m(i,j))]
The hashCode method returns a hash number based on the data values in this
object. Two different Matrix4f objects with identical data values (that is,
equals(Matrix4f) returns true) will return the same hash number. Two
Matrix4f objects with different data members may return the same hash value,
although this is not likely.
The toString method returns a string that contains the values of this Matrix4f.
Variables
The component values of a Matrix4d are directly accessible through the public
variables m00, m01, m02, m03, m10, m11, m12, m13, m20, m21, m22, m23, m30, m31,
m32, and m33. To access the element in row 2 and column 0 of matrix rotate, a
programmer would write rotate.m20. A programmer would access the other
values similarly.
Constructors
These constructors each return a new Matrix4d object. The first constructor gen-
erates a 4 × 4 matrix from the 16 values provided. The second constructor gener-
ates a 4 × 4 matrix from the first 16 values in the array v. The third through sixth
constructors generate a 4 × 4 matrix from the quaternion, translation, and scale
values. The scale is applied only to the rotational components of the matrix
(upper 3 × 3) and not to the translational components. The seventh and eighth
constructors generate a 4 × 4 matrix with the same values as the passed matrix.
The final constructor generates a 4 × 4 matrix with all 16 values set to 0.0.
Methods
The first two methods perform an SVD normalization of this matrix in order to
acquire the normalized rotational component. The values are placed into the
passed parameter. The next two methods perform an SVD normalization of this
matrix to calculate the rotation as a 3 × 3 matrix, the translation, and the scale.
None of the matrix values are modified. The next two methods perform an SVD
normalization of this matrix to acquire the normalized rotational component. The
last two methods retrieve the translational components of this matrix.
The setElement and getElement methods provide a means for accessing a sin-
gle element within a 4 × 4 matrix using indices. This is not a preferred method of
access, but Java 3D provides these methods for functional completeness. The
setElement method takes a row index row (where a value of 0 represents the
first row and a value of 3 represents the fourth row), a column index column
(where a value of 0 represents the first column and a value of 3 represents the
fourth column), and a value. It sets the corresponding element in matrix this to
the specified value. The getElement method also takes a row index row and a
column index column and returns the element at the corresponding location as a
floating-point value.
The two getRow methods copy the matrix values in the specified row into the
array or vector parameter, respectively. The two getColumn methods copy the
matrix values in the specified column into the array or vector parameter,
respectively.
These methods set the rotational component (upper 3 × 3) of this matrix to the
matrix values in the passed argument. The other elements of this matrix are
unchanged. A singular value decomposition is performed on this object’s upper
3 × 3 matrix to factor out the scale, then this object’s upper 3 × 3 matrix compo-
nents are replaced by the passed rotation components, and finally the scale is
reapplied to the rotational components.
These methods set the rotational component (upper 3 × 3) of this matrix to the
matrix values in the passed argument. The other elements of this matrix are
unchanged. A singular value decomposition is performed on this object’s upper
3 × 3 matrix to factor out the scale, then this object’s upper 3 × 3 matrix compo-
nents are replaced by the matrix equivalent of the quaternion, and finally the
scale is reapplied to the rotational components.
This method sets the rotational component (upper 3 × 3) of this matrix to the
equivalent values in the passed argument. The other elements of this matrix are
unchanged. A singular value decomposition is performed on this object’s upper
3 × 3 matrix to factor out the scale, then this object’s upper 3 × 3 matrix compo-
nents are replaced by the matrix equivalent of the axis-angle, and finally the scale
is reapplied to the rotational components.
The two get methods retrieve the upper 3 × 3 values of this matrix and place
them into the matrix m1. The two set methods replace the upper 3 × 3 matrix
values of this matrix with the values in the matrix m1.
This method modifies the translational components of this matrix to the values of
the Vector3d argument. The other values of this matrix are not modified.
The first method sets the scale component of the current matrix by factoring out
the current scale (by doing an SVD) and multiplying by the new scale. The sec-
ond method performs an SVD normalization of this matrix to calculate and
return the uniform scale factor. If the matrix has non-uniform scale factors, the
largest of the x, y, and z scale factors will be returned.
This method adds a scalar to each component of the matrix m1 and places the
result into this. Matrix m1 is not modified.
This method multiplies each component of the matrix m1 by a scalar and places
the result into this. Matrix m1 is not modified.
The first add method adds the matrix m1 to the matrix m2 and places the result
into the matrix this. The second add method adds the matrix this to the matrix
m1 and places the result into the matrix this. The first sub method performs an
element-by-element subtraction of matrix m2 from matrix m1 and places the result
into the matrix this. The second sub method performs an element-by-element
subtraction of the matrix m1 from the matrix this and places the result into the
matrix this.
This method sets the value of this matrix to the row-major array parameter (that
is, the first four elements of the array will be copied into the first row of this
matrix, and so forth).
These methods set the rotational component (upper 3 × 3) of this matrix to the
matrix values in the matrix argument. The other elements of this matrix are ini-
tialized as if this were an identity matrix (that is, an affine matrix with no trans-
lational component).
These methods set the value of this matrix to the value of the passed matrix m1.
These methods set the value of this matrix to the matrix conversion of the quater-
nion argument.
These methods set the value of this matrix to the matrix conversion of the axis
and angle argument.
This method sets the value of this matrix to a translation matrix by the passed
translation value.
These methods set the value of this matrix to the rotation expressed by the
quaternion q1, the translation t1, and the scale s.
This method sets the value of this matrix to a scale matrix with the passed scale
amount.
This method sets the value of this matrix to a scale and translation matrix. The
scale is not applied to the translation, and all of the matrix values are modified.
This method sets the value of this matrix to a scale and translation matrix. The
translation is scaled by the scale factor, and all of the matrix values are modified.
These methods set the value of this matrix from the rotation expressed by the
rotation matrix m1, the translation t1, and the scale s.
The first method sets the value of this matrix to the negation of the m1 parameter.
The second method negates the value of this matrix (this = –this).
The first transpose method transposes the matrix m and places the result into the
matrix this. The second transpose method transposes the matrix this and
places the result back into the matrix this.
The first two transform methods postmultiply this matrix by the tuple vec and
place the result back into vec. The last two transform methods postmultiply this
matrix by the tuple vec and place the result into vecOut.
The first two transform methods postmultiply this matrix by the point argument
point and place the result back into point. The multiplication treats the
three-element point as if its fourth element were 1. The last two transform
methods postmultiply this matrix by the point argument point and place the
result into pointOut.
The first two transform methods postmultiply this matrix by the vector argu-
ment normal and place the result back into normal. The multiplication treats the
three-element vector as if its fourth element were 0. The last two transform
methods postmultiply this matrix by the vector argument normal and place the
result into normalOut.
The first method inverts this matrix in place. The second method sets the value of
this matrix to the inverse of the matrix m1.
The determinant method computes the determinant of the matrix this and
returns the computed value.
The first mul method multiplies matrix m1 with matrix m2 and places the result
into the matrix this. The second mul method multiplies matrix this with matrix
m1 and places the result into the matrix this.
The first method returns true if all of the data members of Matrix4d m1 are equal
to the corresponding data members in this Matrix4d. The second method returns
true if the Object t1 is of type Matrix4d and all of the data members of t1 are
equal to the corresponding data members in this Matrix4d.
This method returns true if the L∞ distance between this Matrix4d and
Matrix4d m1 is less than or equal to the epsilon parameter. Otherwise, this
method returns false. The L∞ distance is equal to
MAX[i = 0,1,2,3; j = 0,1,2,3; abs(this.m(i,j) – m1.m(i,j))]
The hashCode method returns a hash number based on the data values in this
object. Two different Matrix4d objects with identical data values (that is,
equals(Matrix4d) returns true) will return the same hash number. Two
Matrix4d objects with different data members may return the same hash value,
although this is not likely.
The toString method returns a string that contains the values of this Matrix4d.
The GMatrix data members are not public, thus allowing efficient implementa-
tions of sparse matrices. However, the data members can be modified through
public accessors. The class includes three different constructors for creating
matrices and several operators for manipulating these matrices.
Constructors
These constructors each return a new GMatrix. The first constructor generates an
nRow by nCol identity matrix. Note that because row and column numbering
begins with zero, nRow and nCol will be one larger than the maximum possible
matrix index values. The second constructor generates an nRow by nCol matrix
initialized to the values in the array matrix. The last constructor generates a new
GMatrix and copies the initial values from the parameter matrix argument.
Methods
The first mul method multiplies matrix m1 with matrix m2 and places the result
into this. The second mul method multiplies this matrix with matrix m1 and
places the result into this.
The first add method adds this matrix to matrix m1 and places the result back into
this. The second add method adds matrices m1 and m2 and places the result into
this. The first sub method subtracts matrix m1 from the matrix this and places
the result into this. The second sub method subtracts matrix m2 from matrix m1
and places the result into the matrix this.
The first method negates the value of this matrix in place (this = –this). The sec-
ond method sets the value of this matrix to the negation of the matrix m1 (this =
–m1).
The first method inverts this matrix in place. The second method sets the value of
this matrix to the inverse of the matrix m1.
This method subtracts this matrix from the identity matrix and puts the values
back into this (this = I – this).
This method copies a submatrix derived from this matrix into the target matrix.
The rowSource and colSource parameters define the upper left of the submatrix.
The numRow and numCol parameters define the number of rows and columns in
the submatrix. The submatrix is copied into the target matrix starting at (rowD-
est, colDest). The target parameter is the matrix into which the submatrix will
be copied.
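The copySubMatrix semantics above can be sketched in plain Java over row-major arrays:

```java
// Sketch of copySubMatrix: copy a numRow x numCol block whose upper
// left is at (rowSource, colSource) into target at (rowDest, colDest).
public class SubMatrixDemo {
    static void copySubMatrix(double[][] src, int rowSource, int colSource,
                              int numRow, int numCol,
                              int rowDest, int colDest, double[][] target) {
        for (int i = 0; i < numRow; i++)
            for (int j = 0; j < numCol; j++)
                target[rowDest + i][colDest + j] =
                        src[rowSource + i][colSource + j];
    }

    public static void main(String[] args) {
        double[][] src = { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8, 9 } };
        double[][] dst = new double[2][2];
        // Copy the 2x2 block with upper left (1, 1) into dst at (0, 0).
        copySubMatrix(src, 1, 1, 2, 2, 0, 0, dst);
        System.out.println(dst[0][0] + " " + dst[1][1]);  // 5.0 9.0
    }
}
```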
This method changes the size of this matrix dynamically. If the size is increased,
no data values will be lost. If the size is decreased, only those data values whose
matrix positions were eliminated will be lost.
The first set method sets the values of this matrix to the values found in the
matrix array parameter. The values are copied in one row at a time, in
row-major fashion. The array should be at least equal in length to the number of
matrix rows times the number of matrix columns in this matrix. The second set
method sets the values of this matrix to the values found in matrix m1. The last
four set methods set the values of this matrix to the values found in matrix m1.
The first two methods place the values in the upper 3 × 3 of this matrix into the
matrix m1. The next two methods place the values in the upper 4 × 4 of this
matrix into the matrix m1. The final method places the values in this matrix into
the matrix m1. Matrix m1 should be at least as large as this matrix.
The getNumRow method returns the number of rows in this matrix. The getNum-
Col method returns the number of columns in this matrix.
These methods set and retrieve the value at the specified row and column of this
matrix.
The setRow methods copy the values from the array or vector into the specified
row of this matrix. The getRow methods place the values of the specified row
into the array or vector. The setColumn methods copy the values from the array
or vector into the specified column of this matrix. The getColumn methods place
the values of the specified column into the array or vector.
This method sets this matrix to a uniform scale matrix, and all of the values are
reset.
The first transpose method transposes this matrix in place. The second trans-
pose method places the matrix values of the transpose of matrix m1 into this
matrix.
This method returns a string that contains the values of this GMatrix.
This method returns a hash number based on the data values in this object. Two
different GMatrix objects with identical data values (that is, equals(GMatrix)
returns true) will return the same hash number. Two objects with different data
members may return the same hash value, although this is not likely.
The first method returns true if all of the data members of GMatrix m1 are equal
to the corresponding data members in this GMatrix. The second method returns
true if the Object o1 is of type GMatrix and all of the data members of o1 are
equal to the corresponding data members in this GMatrix.
This method returns true if the L∞ distance between this GMatrix and GMatrix
m1 is less than or equal to the epsilon parameter. Otherwise, this method returns
false. The L∞ distance is equal to
MAX[i = 0 … getNumRow()–1; j = 0 … getNumCol()–1; abs(this.m(i,j) – m1.m(i,j))]
The SVD method finds the singular value decomposition (SVD) of this matrix
such that this = U * W * V^T, and returns the rank of this matrix. The values of
U, W, and V are all overwritten. Note that the matrix V is output as V and not V^T.
If this matrix is m × n, then U is m × m, W is a diagonal matrix that is m × n, and
V is n × n. The inverse of this matrix is this^–1 = V * W^–1 * U^T, where W^–1 is a
diagonal matrix computed by taking the reciprocal of each of the diagonal ele-
ments of matrix W.
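The inverse formula above can be sketched in plain Java on square matrices, assuming the SVD factors are already available (computing the SVD itself is beyond this sketch):

```java
// Sketch of the inverse via SVD: given this = U * W * V^T, the inverse
// is V * W^-1 * U^T, where W^-1 reciprocates each diagonal element.
public class SvdInverseDemo {
    static double[][] mul(double[][] a, double[][] b) {
        int n = a.length;
        double[][] r = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int k = 0; k < n; k++)
                for (int j = 0; j < n; j++)
                    r[i][j] += a[i][k] * b[k][j];
        return r;
    }

    static double[][] transpose(double[][] m) {
        int n = m.length;
        double[][] r = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                r[i][j] = m[j][i];
        return r;
    }

    static double[][] inverseFromSVD(double[][] U, double[] w, double[][] V) {
        int n = w.length;
        double[][] wInv = new double[n][n];
        for (int i = 0; i < n; i++)
            wInv[i][i] = 1.0 / w[i];  // reciprocal of each diagonal element
        return mul(mul(V, wInv), transpose(U));
    }

    public static void main(String[] args) {
        // For the diagonal matrix diag(2, 4), U and V are the identity,
        // so the inverse is diag(0.5, 0.25).
        double[][] identity = { { 1, 0 }, { 0, 1 } };
        double[][] inv = inverseFromSVD(identity, new double[] { 2, 4 }, identity);
        System.out.println(inv[0][0] + " " + inv[1][1]);  // 0.5 0.25
    }
}
```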
B.1 Compression
The process of geometry compression is as follows:
1. The geometry to be compressed is converted into a generalized mesh form,
which allows a triangle to be, on average, specified by 0.80 vertices.
2. The data for each vertex component of the geometry is converted to the
most efficient representation format for its type and then quantized to as
few bits as possible.
3. These quantized bits are differenced between successive vertices, and the
results are modified Huffman-encoded into self-describing variable-bit-
length data elements.
4. These variable-length elements are strung together using Compressed Ge-
ometry’s seven geometry instructions into a final Compressed Geometry
block.
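Steps 2 and 3 can be illustrated with a deliberately simplified plain-Java sketch: quantize each coordinate to a fixed number of bits, then difference successive quantized values. The bit widths and helper names here are hypothetical; the actual modified Huffman coding and the seven geometry instructions are omitted.

```java
// Hypothetical illustration of quantization followed by delta encoding.
// Nearby vertices yield small deltas, which later compress well.
public class DeltaDemo {
    /** Quantizes v in [-1.0, 1.0) to a signed fixed-point integer. */
    static int quantize(double v, int bits) {
        return (int) Math.floor(v * (1 << (bits - 1)));
    }

    /** Differences successive values; the first value stays absolute. */
    static int[] deltas(int[] q) {
        int[] d = new int[q.length];
        d[0] = q[0];
        for (int i = 1; i < q.length; i++)
            d[i] = q[i] - q[i - 1];
        return d;
    }

    public static void main(String[] args) {
        double[] xs = { 0.50, 0.52, 0.53 };  // nearby vertex coordinates
        int[] q = new int[xs.length];
        for (int i = 0; i < xs.length; i++)
            q[i] = quantize(xs[i], 10);       // 10-bit quantization
        int[] d = deltas(q);
        System.out.println(d[0] + " " + d[1] + " " + d[2]);  // 256 10 5
    }
}
```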
B.2 Decompression
For pure software implementations, upon receipt, compressed geometry blocks are
decompressed into the local host’s preferred geometry format by reversing the
above process. This decompression can be performed in a lazy manner, avoiding
full expansion into memory until the geometry is needed for rendering.
A generalized triangle strip’s ability to change from “strip” to “fan” mode in the middle of a
strip allows more complex geometry to be represented compactly, and requires less
input data bandwidth. The restart capability allows several pieces of disconnected
geometry to be passed as one data block. Figure B-1 shows a single generalized tri-
angle strip and the associated replacement codes.
Triangles are normalized such that the front face is always defined by a counter-
clockwise vertex order after transformation (assuming a right-handed coordinate
system). To support this, there are two flavors of restart: restart (counterclock-
wise) and restart_reverse (clockwise). The vertex order is reversed after every
replace_oldest, but remains the same after every replace_middle.
Figure B-1 (drawing not reproduced) labels each of its 33 vertices with a
replacement code, covering a triangle strip, a triangle fan, an independent
triangle, an independent quad, and a mixed strip:

Vertex codes: 1 Restart; 2–6 RO; 7 Restart; 8–9 RO; 10–14 RM; 15 Restart;
16–17 RO; 18 Restart; 19–21 RO; 22 Restart; 23–28 RO; 29–32 RM; 33 RO
(RO = Replace Oldest, RM = Replace Middle)
Figure B-2 (drawing not reproduced) shows an example surface together with its
encoding both as a generalized triangle strip and as a generalized triangle mesh:

Generalized Triangle Strip:
R6, O1, O7, O2, O3, M4, M8, O5, O9, O10, M11, M17, M16, M9, O15, O8, O7,
M14, O13, M6, O12, M18, M19, M20, M14, O21, O15, O22, O16, O23, O17, O24,
M30, M29, M28, M22, O21, M20, M27, O26, M19, O25, O18

Generalized Triangle Mesh:
R6p, O1, O7p, O2, O3, M4, M8p, O5, O9p, O10, M11, M17p, M16p, M-3, O15p,
O-5, O6, M14p, O13p, M-9, O12, M18p, M19p, M20p, M-5, O21p, O-7, O22p, O-9,
O23, O-10, O-7, M30, M29, M28, M-1, O-2, M-3, M27, O26, M-4, O25, O-5
While it can be represented by one triangle strip, many of the interior vertices
appear twice in the strip. This is inherent in any approach wishing to avoid refer-
ences to old data. Some systems have tried using a simple regular mesh buffer to
support reuse of old vertices, but there is a problem with this approach in practice:
In general, geometry does not come in a perfectly regular rectangular mesh struc-
ture.
The generalized technique employed by compressed geometry addresses this prob-
lem. Old vertices are explicitly pushed into a queue, and then explicitly referenced
in the future when the old vertex is desired again. This fine control supports irreg-
ular meshes of nearly any shape. Any viable technique must recognize that storage
is finite; thus the maximum queue length is fixed at 16, requiring a four-bit index.
We refer to this queue as the mesh buffer. The combination of generalized triangle
strips and mesh buffer references is referred to as a generalized triangle mesh.
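The 16-entry queue above can be sketched in plain Java. The addressing scheme shown (index 0 refers to the most recently pushed vertex) is an assumption for illustration; the spec defines the actual instruction encoding.

```java
// Sketch of a 16-entry mesh buffer: vertices are explicitly pushed,
// then re-referenced later by a four-bit index.
public class MeshBufferDemo {
    private final double[][] slots = new double[16][];
    private int next = 0;  // next slot to overwrite

    void push(double[] vertex) {
        slots[next] = vertex;
        next = (next + 1) & 0xF;  // four-bit index wraps at 16
    }

    /** References an old vertex; index 0 is the most recent push. */
    double[] reference(int index) {
        return slots[(next - 1 - index) & 0xF];
    }

    public static void main(String[] args) {
        MeshBufferDemo buf = new MeshBufferDemo();
        for (int i = 0; i < 20; i++)           // more pushes than slots
            buf.push(new double[] { i, 0, 0 });
        // Only the 16 most recent vertices remain re-referenceable.
        System.out.println(buf.reference(0)[0] + " " + buf.reference(15)[0]);
    }
}
```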
The fixed mesh buffer size requires all tessellators or restripifiers for compressed
geometry to break up any runs longer than 16 unique references. Since compressed
geometry is not meant to be programmed directly at the user level, but rather by
sophisticated tessellators or reformatters, this is not too onerous a restriction. Six-
teen old vertices allow up to 94 percent of the redundant geometry to avoid being
respecified. Figure B-2 also contains an example of a general mesh buffer repre-
sentation of the surface geometry.
The language of compressed geometry supports the four vertex replacement codes
of generalized triangle strips (replace oldest, replace middle, restart, and restart
reverse), and adds another bit in each vertex header to indicate if this vertex should
be pushed into the mesh buffer or not. The mesh buffer reference instruction has a
four-bit field to indicate which old vertex should be re-referenced, along with the
two-bit vertex replacement code. Mesh buffer references do not have an option to
re-push their data into the mesh buffer; old vertices can be recycled only once.
Geometry rarely is composed purely of positional data; generally a normal and/or
color are also specified per vertex. Therefore, mesh buffer entries are required to
contain storage for all associated per-vertex information (specifically including
normals and colors).
For maximum space efficiency, when a vertex is specified in the data stream, (per-
vertex) normal and/or color information should be directly bundled with the posi-
tion information. This bundling is controlled by two state bits: bundle normals with
vertices (bnv), and bundle colors with vertices (bcv). When a vertex is pushed into
the mesh buffer, these bits control whether its bundled normal and/or color are
pushed as well. During a mesh buffer reference instruction, this process is reversed.
The two bits specify if a normal and/or color should be inherited from the mesh
buffer storage, or inherited from the current normal or current color.
There are explicit instructions for setting these two current values. An important
exception to this rule occurs when an explicit “set current normal” instruction is
followed by a mesh buffer reference, with the bnv state bit active. In this case, the
former overrides the mesh buffer normal. This allows compact representation of
hard edges in surface geometry. The analogous semantics are also defined for colors, allowing compact representation of hard edges in images embedded as geometry.
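The mesh buffer semantics above can be made concrete with a small sketch. The following Java class is illustrative only (the class, fields, and methods are our own, not part of the Java 3D API): a fixed 16-entry circular queue, with the bnv/bcv bundling bits controlling what is pushed and what is inherited on a reference.

```java
import java.util.Arrays;

public class MeshBufferSketch {
    static class Vertex {
        float[] position, normal, color;
        Vertex(float[] p, float[] n, float[] c) { position = p; normal = n; color = c; }
    }

    // Fixed 16-entry circular queue, addressed by a 4-bit index.
    private final Vertex[] slots = new Vertex[16];
    private int next = 0;

    // State bits: bundle normals with vertices, bundle colors with vertices.
    boolean bnv = true, bcv = true;
    float[] currentNormal, currentColor;

    // Push: only the currently bundled components are stored.
    void push(float[] p, float[] n, float[] c) {
        slots[next] = new Vertex(p, bnv ? n : null, bcv ? c : null);
        next = (next + 1) & 15;
    }

    // Mesh buffer reference: index 0 is the most recently pushed vertex.
    // Unbundled components are inherited from the current normal/color.
    Vertex reference(int index) {
        Vertex v = slots[(next - index - 1) & 15];
        float[] n = bnv ? v.normal : currentNormal;
        float[] c = bcv ? v.color  : currentColor;
        return new Vertex(v.position, n, c);
    }

    public static void main(String[] args) {
        MeshBufferSketch mb = new MeshBufferSketch();
        mb.push(new float[]{0, 0, 0}, new float[]{0, 0, 1}, new float[]{1, 0, 0});
        mb.push(new float[]{1, 0, 0}, new float[]{0, 0, 1}, new float[]{0, 1, 0});
        // Index 0 refers to the newest vertex, index 1 to the one before it.
        System.out.println(Arrays.toString(mb.reference(1).position)); // [0.0, 0.0, 0.0]
    }
}
```

Note that a real decompressor would also honor the override semantics described above; this sketch shows only the push/reference and bundling behavior.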
For all these reasons, we decided to use a regular grid in the angular space within
one sextant as our distribution. Thus, rather than a monolithic 11-bit index, all nor-
mals within a sextant are much more conveniently represented as two 6-bit orthog-
onal angular addresses, revising our grand total to 18 bits. Just as for positions and
colors, if more quantization of normals is acceptable, then these 6-bit indices can
be reduced to fewer bits, and thus absolute normals can be represented using any-
where from 18 to as few as 6 bits. But as will be seen, we can delta-encode this
space, further reducing the number of bits required for high-quality representation
of normals.
This triangular-shaped patch runs from 0 to π/4 radians in θ, and from 0 to as much as φ_max = 0.615479709 radians in φ.
Quantized angles are represented by two n-bit integers θ̂ n and φ̂ n , where n is in the
range of 0 to 6. The sextant coordinate system defined by these parameters is shown
in Figure B-4, for the case of n = 6. For a given n, the relationship between these
indices and the angles θ and φ is

    θ(θ̂_n) = sin⁻¹( tan( φ_max · (2^n − θ̂_n) / 2^n ) )
                                                            (B.2)
    φ(φ̂_n) = φ_max · φ̂_n / 2^n
These two equations show how values of θ̂ n and φ̂ n can be converted to spherical
coordinates θ and φ, which in turn can be converted to rectilinear normal coordinate
components via equation B.1.
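Equation B.2 can be transcribed directly. The following Java sketch (class and method names are ours, not from the Java 3D API) converts a pair of quantized indices to the spherical angles θ and φ; conversion onward to rectilinear components would use equation B.1.

```java
public class NormalAngles {
    // Upper bound of phi within one sextant, from the text above.
    static final double PHI_MAX = 0.615479709;

    // theta(theta_hat) = asin(tan(PHI_MAX * (2^n - theta_hat) / 2^n))
    static double theta(int thetaHat, int n) {
        double scale = 1 << n;
        return Math.asin(Math.tan(PHI_MAX * ((scale - thetaHat) / scale)));
    }

    // phi(phi_hat) = PHI_MAX * phi_hat / 2^n
    static double phi(int phiHat, int n) {
        return PHI_MAX * phiHat / (1 << n);
    }

    public static void main(String[] args) {
        int n = 6;
        // At theta_hat = 2^n the first angle collapses to asin(tan(0)) = 0,
        // and at theta_hat = 0 it reaches pi/4, matching the patch bounds.
        System.out.println(theta(64, n)); // 0.0
        System.out.println(phi(64, n));   // phi_max itself
    }
}
```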
To reverse the process, for example, to encode a given normal n into θ̂ n and φ̂ n , one
cannot just invert equation B.2. Instead, the n must first be folded into the canonical
octant and sextant, resulting in n'. Then n' must be dotted with all quantized nor-
mals in the sextant. For a fixed n, the values of θ̂ n and φ̂ n that result in the largest
(nearest unity) dot product define the proper encoding of n.
Now the complete bit format of absolute normals can be given. The uppermost
three bits specify the sextant, the next three bits the octant, and finally two n-bit
fields specify θ̂ n and φ̂ n . The three-bit sextant field takes on one of six values, the
binary codes for which are shown in Figure B-3.
[Figure B-3: The six sextants of an octant of the unit sphere, labeled with their three-bit binary codes (000, 001, 010, 011, 100, 101) and bounded by the comparisons X vs. Y, Y vs. Z, and X vs. Z.]
This discussion has ignored some details. In particular, the three normals at the cor-
ners of the canonical patch are multiply represented (6, 8, and 12 times). By
employing the two unused values of the sextant field, these normals can be
uniquely encoded as special normals. The normal sub-instruction describes the
special encoding used for two of these corner cases (14 total special normals).
This representation of normals is amenable to delta encoding. Within a given sextant, the delta code between two normals is simply the difference in θ̂_n and φ̂_n: Δθ̂_n and Δφ̂_n.
[Figure B-4: Sextant coordinate system for n = 6. The axes are θ̂_n (vertical) and φ̂_n (horizontal), each running from 0 to 64 in steps of 8; the valid coordinates fill the triangular region below the diagonal θ̂_n + φ̂_n = 64.]
The left edge of a sextant will always be another sextant within the same octant, as
will be the diagonal edge of a sextant. Note that the coordinate system of a sextant
is only defined for coordinate values in the triangular region of the sextant.
For a given value of n (in the range of 1 to 6), where n is the number of bits of quantization of the sextant coordinates, the valid coordinates are bounded by θ̂_n ≥ 0, φ̂_n ≥ 0, and θ̂_n + φ̂_n ≤ 2^n. For any given sextant number, the left and diagonal neighbors
of that sextant are explicitly known. The bottom edge of a sextant will be the same
sextant number, but in a different octant. The octant will differ from the current
octant by the flip of exactly one of the sign bits. Which octant sign bit will be
flipped is also explicitly known. The rules for finding each edge neighbor for any
sextant are given in Table B-1.
Table B-1  Sextant Edge Neighbors

Edge                Coordinate transformation      Sextant/octant update
Left neighbor       invert θ̂_n                     update sextant
Diagonal neighbor   2^n − θ̂_n and 2^n − φ̂_n        update sextant
Bottom neighbor     invert φ̂_n                     flip one bit of octant number
In Compressed Geometry, all component delta fields and all component absolute
fields (except component absolute normal fields) are represented by signed num-
bers. For each different coordinate component type, there are different wrap rules
for what happens when a delta component overflows the absolute representation
range. For positions, both positive and negative component values are legal, and
overflowing past the largest positive component value is explicitly defined to wrap
the coordinate to negative values, and overflowing the most negative component
value wraps to the positive values. For colors, negative component values are ille-
gal, and wrapping out of the positive component values is illegal. For normals, spe-
cial wrapping rules allow delta values to change the current sextant or octant in
certain cases, without explicitly specifying the new sextant or octant.
The special rules for wrapping during normal deltas are:
• Normal Case:
  if θ̂_n + Δθ̂_n ≥ 0, φ̂_n + Δφ̂_n ≥ 0, and θ̂_n + Δθ̂_n + φ̂_n + Δφ̂_n ≤ 2^n:
  new θ̂_n ← θ̂_n + Δθ̂_n, new φ̂_n ← φ̂_n + Δφ̂_n,
  current sextant and octant unchanged.
• Left Edge Wrap Case:
  if θ̂_n + Δθ̂_n < 0, φ̂_n + Δφ̂_n ≥ 0, and −(θ̂_n + Δθ̂_n) + φ̂_n + Δφ̂_n ≤ 2^n:
  new θ̂_n ← −(θ̂_n + Δθ̂_n), new φ̂_n ← φ̂_n + Δφ̂_n,
  current sextant updated from left edge rules in Table B-1, current octant
  unchanged.
• Diagonal Edge Wrap Case:
  if θ̂_n + Δθ̂_n ≥ 0, φ̂_n + Δφ̂_n ≥ 0, and θ̂_n + Δθ̂_n + φ̂_n + Δφ̂_n > 2^n:
  new θ̂_n ← 2^n − (θ̂_n + Δθ̂_n), new φ̂_n ← 2^n − (φ̂_n + Δφ̂_n),
  current sextant updated from diagonal edge rules in Table B-1, current octant
  unchanged.
• Bottom Edge Wrap Case:
  if θ̂_n + Δθ̂_n ≥ 0, φ̂_n + Δφ̂_n < 0, and θ̂_n + Δθ̂_n − (φ̂_n + Δφ̂_n) ≤ 2^n:
  new θ̂_n ← θ̂_n + Δθ̂_n, new φ̂_n ← −(φ̂_n + Δφ̂_n),
  current sextant unchanged, current octant updated from bottom edge rules in
  Table B-1.
Any wrap that does not fall into one of these categories is an illegal delta, and is not
allowed within a valid Compressed Geometry stream.
(Note that while the wrapping is defined here in terms of a given normal component
quantization value n, in most implementations the wrapping would be applied after
the current component values and delta values have been normalized into the great-
est allowed values, e.g., n = 6.)
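The four wrap cases above translate directly into code. The following Java sketch assumes n = 6; the neighbor tables are taken from the flip_u, flip_uv, and flip_v tables given in the decompression pseudocode later in this appendix, but the class and method names are illustrative, not from the Java 3D API.

```java
public class NormalDeltaWrap {
    static final int N = 64; // 2^n for n = 6
    int thetaHat, phiHat, sextant, octant;

    void applyDelta(int dTheta, int dPhi) {
        int t = thetaHat + dTheta, p = phiHat + dPhi;
        if (t >= 0 && p >= 0 && t + p <= N) {           // normal case
            thetaHat = t; phiHat = p;
        } else if (t < 0 && p >= 0 && -t + p <= N) {    // left edge wrap
            thetaHat = -t; phiHat = p;
            sextant = leftNeighbor(sextant);
        } else if (t >= 0 && p >= 0 && t + p > N) {     // diagonal edge wrap
            thetaHat = N - t; phiHat = N - p;
            sextant = diagonalNeighbor(sextant);
        } else if (t >= 0 && p < 0 && t - p <= N) {     // bottom edge wrap
            thetaHat = t; phiHat = -p;
            octant ^= octantFlipBit(sextant);           // flip one octant sign bit
        } else {
            throw new IllegalStateException("illegal delta");
        }
    }

    // Table B-1 neighbor rules, as encoded by the flip tables in the
    // decompression pseudocode (flip_u, flip_uv, flip_v respectively).
    static int leftNeighbor(int s)     { return new int[]{4, 5, 3, 2, 0, 1}[s]; }
    static int diagonalNeighbor(int s) { return new int[]{2, 3, 0, 1, 5, 4}[s]; }
    static int octantFlipBit(int s)    { return new int[]{2, 4, 1, 1, 2, 4}[s]; }

    public static void main(String[] args) {
        NormalDeltaWrap w = new NormalDeltaWrap();
        w.thetaHat = 10; w.phiHat = 10;
        w.applyDelta(-20, 5); // crosses the left edge
        System.out.println(w.thetaHat + " " + w.phiHat + " sextant=" + w.sextant);
        // 10 15 sextant=4
    }
}
```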
bit. The tag becomes irrelevant because there is nothing to differentiate. In general,
there are only a few specialized cases where zero length tags are useful.
One additional complication was required to enable reasonable hardware imple-
mentations. As will be seen in a later section, all instructions are broken up into an
eight-bit header and a variable-length body. Sufficient information is present in the
header to determine the length of the body. But to give the hardware time to process
the header information, the header of one instruction must be placed in the stream
before the body of the previous instruction. Thus the sequence

    … B0 H1 B1 H2 B2 H3 …

has to be encoded as follows:

    … H1 B0 H2 B1 H3 B2 …
This header forwarding is applied to all instructions. The vertex instruction optionally has one or two sub-fields that need forwarded headers. In these special cases the headers are only six bits in length, because no opcode is present.
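The reordering itself is a simple one-instruction delay. The following Java sketch (types and names are illustrative; headers and bodies are modeled as strings rather than bit fields) shows how an encoder would interleave headers ahead of the preceding bodies:

```java
import java.util.ArrayList;
import java.util.List;

public class HeaderForwarding {
    // Reorders H1 B1 H2 B2 H3 B3 ... into H1 H2 B1 H3 B2 ... B3:
    // each header is emitted before the previous instruction's body.
    static List<String> forward(List<String[]> instructions) {
        List<String> out = new ArrayList<>();
        String pendingBody = null;
        for (String[] inst : instructions) {
            out.add(inst[0]);                              // this instruction's header
            if (pendingBody != null) out.add(pendingBody); // previous instruction's body
            pendingBody = inst[1];
        }
        if (pendingBody != null) out.add(pendingBody);     // final body flushes last
        return out;
    }

    public static void main(String[] args) {
        List<String[]> prog = List.of(
            new String[]{"H1", "B1"},
            new String[]{"H2", "B2"},
            new String[]{"H3", "B3"});
        System.out.println(forward(prog)); // [H1, H2, B1, H3, B2, B3]
    }
}
```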
[nop instruction layout: 8-bit opcode 00000001, 5-bit bit count, then 0 to 31 zero bits.]
The variable length no-operation (nop) instruction has an 8-bit opcode, a 5-bit
count field, and a 0- to 31-bit field of zeros. The total length of the variable-length
no-operation instruction is between 13 and 44 bits.
The variable-length nop instruction’s primary use is to align compressed geometry
instructions to word boundaries, when desired. This is useful if one wishes to
“patch” a compressed geometry instruction in the middle of a stream without hav-
ing to bit-align the patch.
[setState instruction layout: 7-bit opcode 0001100, followed by the bnv, bcv, and cap state bits and one spare bit.]
The setState instruction has a 7-bit opcode, 3 bits of state to be set, and a spare,
for a total length of 11 bits. The first and second state bits indicate if normals and/
or colors will be bundled with vertex instructions, respectively. The third state bit
indicates if colors will contain an alpha value, in addition to the standard RGB. The
final (spare) bit is unused and reserved for future use.
In the assembly syntax, the specific unbundling of a value is indicated by three unbundling tags: {normalsUnbundled}, {colorsUnbundled}, and {alphaUnbundled}. The six possible bundling and unbundling tags can be combined in almost any order. If neither a bundling nor an unbundling tag is present for a value, then the value is implicitly unbundled. It is an error to have both a bundled and an unbundled tag present for the same value in the same setState instruction.
[Instruction and sub-instruction bit layouts:
  vertex:    opcode 01, Position sub-instruction (bits 0–5 forwarded), rep, mbp,
             then bundled Normal and/or Color sub-instructions
  setNormal: opcode 11, Normal bits 0–5 (forwarded), Normal bits 6–n
  setColor:  opcode 10, Color bits 0–5 (forwarded), Color bits 6–n
  mbr (meshBufferReference): opcode 001, 4-bit index, 2-bit rep
  setState:  opcode 0001100, bnv, bcv, cap, spare
  setTable:  opcode 00010, 2-bit table, 7-bit address/range, data length,
             absolute/relative, up-shift
  nop:       opcode 00000001, 5-bit bit count, 0–31 zero bits
  Position sub-instruction: tag, ΔX, ΔY, ΔZ
  Normal sub-instruction:   tag, Δθ̂, Δφ̂ (relative) or sextant, octant, θ̂, φ̂ (absolute)
  Color sub-instruction:    tag, ΔR, ΔG, ΔB, (Δα)]
The setTable instruction has a 5-bit op code, a 2-bit table field, a 7-bit address/
range field, a 4-bit data length field, an absolute/relative bit, and a 4-bit up-shift
field. The total instruction length is fixed at 23 bits. The table and address/range
fields specify which decompression table entries to update; the remaining fields
comprise the values to which to update the table entries.
The two-bit table specifies for which of the three decompression tables this update
is targeted:
00 Position
01 Color
10 Normal
11 Unused—reserved for future use
The seven-bit address/range field specifies which entries in the specified table are
to be set to the values in the following fields.
Address/Range     Semantics                                            Implicit Tag Length
1 a5a4a3a2a1a0    set table entry a5a4a3a2a1a0                         6
01 a5a4a3a2a1     set table entries a5a4a3a2a1 0 through a5a4a3a2a1 1  5
001 a5a4a3a2      set table entries a5a4a3a2 00 through a5a4a3a2 11    4
0001 a5a4a3       set table entries a5a4a3 000 through a5a4a3 111      3
00001 a5a4        set table entries a5a4 0000 through a5a4 1111        2
000001 a5         set table entries a5 00000 through a5 11111          1
0000001           set table entries 000000 through 111111              0
The idea is that table settings are made in aligned power-of-two ranges. The posi-
tion of the first ‘1’ bit in the address/range field indicates how many entries are to
be consecutively set; the remaining bits after the first ‘1’ are the upper address bits
of the base of the table entries to be set. This also sets the length of the “tag” that
this entry defines as equal to the number of address bits (if any) after the first ‘1’ bit.
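The scheme can be decoded mechanically: find the first '1' bit, treat the bits after it as the upper address bits, and derive the entry count and implicit tag length from its position. A sketch in Java (names are illustrative, not from the Java 3D API):

```java
public class AddressRange {
    // Decodes a 7-bit address/range field.
    // Returns {baseEntry, count, implicitTagLength}.
    static int[] decode(int field) {
        for (int lead = 6; lead >= 0; lead--) {
            if (((field >> lead) & 1) == 1) {
                int addressBits = lead;                 // bits after the first '1'
                int upper = field & ((1 << lead) - 1);  // upper address bits
                int count = 1 << (6 - addressBits);     // entries set consecutively
                int base = upper << (6 - addressBits);  // base table entry
                return new int[]{base, count, addressBits};
            }
        }
        throw new IllegalArgumentException("address/range field must contain a 1");
    }

    public static void main(String[] args) {
        // 1 a5..a0 = 0b1101010: sets the single entry 0b101010, tag length 6.
        int[] r = decode(0b1101010);
        System.out.println(r[0] + " " + r[1] + " " + r[2]); // 42 1 6
        // 0000001: sets all 64 entries, tag length 0.
        r = decode(0b0000001);
        System.out.println(r[0] + " " + r[1] + " " + r[2]); // 0 64 0
    }
}
```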
The data length specifies how large the delta values to be associated with this tag
are; a data length of 12 implies that the upper 4 bits are to be sign extensions of the
incoming delta value. Note that the data length describes not the length of the delta
value coming in, but the final position of the delta value for reconstruction. In other
words, the data length field is the sum of the actual delta bits to be read in plus the
up-shift amount. For the position and color tables, the data length values of 1 to 15
correspond to lengths of 1 to 15, but the data length value of 0 encodes an actual
length of 16, as a length of 0 makes no sense for positions and colors. For normals,
a length of 0 is sometimes appropriate, and the maximum length needed is only 7.
Thus for normals, the values 0 to 7 map through 0 to 7, and 8 to 15 are not used.
The up-shift value is the number of bits that the delta values described by these tags
will be shifted up before being added to the current value. The up-shift is useful for
quantizing the data to save space; it cannot be used to extend the range of the data
represented. You are still limited to 16 bits (less for normals) for the resultant data
even with a large up-shift value. The up-shift amount is essentially the number of
low bits that you don’t need to specify in the incoming data as they will always be
zero. It is illegal for the up-shift to be greater than or equal to the data length.
So, there are three portions of the resultant data: the sign extension, the incoming
data, and the up-shift. For example, if you have a position with a data length of 12
and an up-shift of 4, then the resultant data is made up of 4 sign extension bits in
the high bits, 8 bits of incoming data, and 4 bits of zero in the low bits, for the up-
shift.
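The worked example above (data length 12, up-shift 4) can be checked with a short sketch. The method name and class are illustrative; the logic follows the rule that the resultant value is sign extension, then the incoming bits, then up-shift zeros.

```java
public class DeltaReconstruct {
    // Builds the 16-bit resultant value: sign extension in the high bits,
    // the incoming bits in the middle, and up-shift zeros in the low bits.
    // The stream supplies (dataLength - upShift) actual bits.
    static short reconstruct(int incoming, int dataLength, int upShift) {
        int shifted = incoming << upShift;        // low bits forced to zero
        int signBit = 1 << (dataLength - 1);      // sign of the dataLength-bit value
        if ((shifted & signBit) != 0) shifted |= -(1 << dataLength); // sign-extend
        return (short) shifted;
    }

    public static void main(String[] args) {
        // 8 incoming bits 0x7F, shifted up 4 -> 0x7F0, sign bit (bit 11) clear.
        System.out.println(reconstruct(0x7F, 12, 4));  // 2032
        // 8 incoming bits 0x80, shifted up 4 -> 0x800, sign bit set.
        System.out.println(reconstruct(0x80, 12, 4));  // -2048
    }
}
```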
The absolute/relative flag indicates whether this table entry describes values that
are to be interpreted as an absolute reference or a relative delta – a 0 value indicates
relative, a 1 value indicates absolute. Note that for normals, absolute references
will have an additional six leading bits describing the absolute octant and sextant.
[mbr instruction layout: opcode 001, 4-bit index, 2-bit rep.]
There is no mesh buffer re-push bit; mesh buffer contents may be referenced mul-
tiple times until 16 newer vertices have been pushed; if a vertex is still needed it
must be resent.
In general, the semantics of executing a mesh buffer reference instruction are nearly the same as executing a vertex instruction with data fields identical to those contained at the indicated mesh buffer location. There are, however, several subtle differences. First, as previously indicated, a mesh buffer reference never causes new
values to appear in the mesh buffer; nor does it cause any mesh buffer values to go
away.
Second, the effects of any intervening setState instructions changing the bundling bits need to be considered. If normals were bundled when the vertex was originally pushed into the mesh buffer, but normals are not bundled when the mbr instruction is executed, the old normal value does not replace the current normal value. Instead, the mbr instruction will use the current setting of the normal value. The same logic applies to colors and alpha. An mbr instruction only accesses the mesh buffer for those vertex components that are currently bundled.
The inverse case is considered an error: if normals were not bundled at the time the
vertex instruction pushed a vertex into the mesh buffer, but normals are bundled at
the time of execution of the mbr instruction, the normal value will be undefined.
Such a sequence will result in an invalid Compressed Geometry object. Once
again, the same logic applies for colors. A push in a vertex instruction causes only
the currently bundled vertex components to be stored into the mesh buffer.
Version 1.2, March 2000 481
B.12.5 Position Sub-instruction 3D GEOMETRY COMPRESSION
There is one more special case: when normals are bundled, if a setNormal instruction was executed before an mbr instruction, and the instructions executed between these two do not include any vertex or setState (or mbr) instructions, the semantics of normal override apply: rather than inheriting all the data fields of the vertex from the stored mesh buffer values, the normal value is instead taken from the current normal value, as set by the setNormal instruction. This allows for hard edges in otherwise shared geometry. The idea is that otherwise there is no logical reason for a setNormal instruction whose effect would have been invalidated by the inheritance within the mbr instruction. Once again, a similar logic applies to setColor instructions and the generation of a color override condition; this supports hard edges in colors. Note that any overrides are invalidated by setState or vertex instructions, and are also no longer in effect after an mbr instruction is encountered.
An override also removes the invalidity that would otherwise result from a normal or color not having been bundled with the vertex at the time the vertex was pushed into the mesh buffer.
For clarity, because it is by far the most typical case, the three coordinate bit fields
are labeled ∆X ∆Y ∆Z, though more properly they are X, Y, and Z fields; their
actual interpretation is absolute or relative depending on the setting of that bit in
the decompression table entry corresponding to the tag field. In both cases the
fields are signed two’s-complement numbers.
You must always specify at least one absolute position before using any relative
positions. It is illegal to have a relative position before the first absolute position.
It appears that, depending on the current position, half of the possible delta values
are illegal. (For ease of understanding these examples, we will treat positions as
integers.) For instance, going +10,000 from 30,000 will wrap past the positive limit
of 32,767 for signed 16-bit two’s complement arithmetic. However, this turns out
to be very useful. For example, if your current X position is –20,000 and the next
X position is 30,000 then the difference that you’d like to use as a delta is +50,000,
which is not directly representable. When you compute that difference using 16-bit
arithmetic, the value wraps to –15,536, which can be represented as a delta. When
–15,536 is added back to –20,000 on decompression, instead of getting –35,536,
again the 16-bit arithmetic wraps and we get 30,000, which is the desired result.
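This wrap-around arithmetic can be checked directly with Java's 16-bit short type:

```java
public class WrapDemo {
    public static void main(String[] args) {
        short current = -20000, target = 30000;
        // The true difference is +50000, which wraps in 16-bit arithmetic.
        short delta = (short) (target - current);
        System.out.println(delta);                  // -15536
        // On decompression, adding the delta wraps back to the target.
        short decoded = (short) (current + delta);
        System.out.println(decoded);                // 30000
    }
}
```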
and/or delta (or absolute) fields must be expanded so that the total number of bits used for the entire sub-instruction is at least six.
For clarity, because it is by far the most typical case, the color component bit-fields
are labeled ∆R ∆G ∆B (∆α), though more properly they are R, G, and B fields; their
actual interpretation is absolute or relative depending on the setting of that bit in
the decompression table entry corresponding to the tag field. In both cases the
fields are signed two’s-complement numbers. A sign bit is required for absolute
color components. Negative color components make no sense and are ill-defined,
so the sign bit on absolute components should always be zero. Similarly for delta
color components, a negative result from adding a delta component to the current
component makes no sense, and so negative results are also ill-defined.
If the most recent setting of the cap bit by a setState instruction is zero, then no
fourth (alpha) field will be expected, and must not be present. If the cap bit was set,
then the alpha field will be processed and must be present.
You must always specify at least one absolute color before using any relative col-
ors. It is illegal to have a relative color before the first absolute color.
The rest of the graphics pipeline and frame buffer following the geometry decom-
pression stage may choose not to use all (up to) 16 bits of color component infor-
mation; in this case it is acceptable to truncate the trailing bits during
decompression. What the geometry decompression format does require is that
color setting of any size up to 16 bits be supported, even if all the bits are not used.
Typically, implementations may use just 12 bits, 8 bits, or even 5 bits from each
color component.
[Normal sub-instruction layout (absolute special): tag (0–6 bits), sextant bits 11, 4-bit Special field.]
Assembly syntax: Table B-4 below shows the assembly syntax for specifying the
special normals in the “Special” field of Normal sub-instructions (as well as set-
Normal instructions).
Table B-4  Syntax for Specifying Special Normals

Syntax  Special   NX      NY      NZ     Comment
+00     0000      1.0     0.0     0.0    +X axis
-00     0010     –1.0     0.0     0.0    –X axis
0+0     0100      0.0     1.0     0.0    +Y axis
0-0     0110      0.0    –1.0     0.0    –Y axis
00+     1000      0.0     0.0     1.0    +Z axis
00-     1010      0.0     0.0    –1.0    –Z axis
+++     0001      1/√3    1/√3    1/√3   +X +Y +Z
++-     0011      1/√3    1/√3   –1/√3   +X +Y –Z
+-+     0101      1/√3   –1/√3    1/√3   +X –Y +Z
+--     0111      1/√3   –1/√3   –1/√3   +X –Y –Z
-++     1001     –1/√3    1/√3    1/√3   –X +Y +Z
-+-     1011     –1/√3    1/√3   –1/√3   –X +Y –Z
--+     1101     –1/√3   –1/√3    1/√3   –X –Y +Z
---     1111     –1/√3   –1/√3   –1/√3   –X –Y –Z
The Normal sub-instruction can appear within either a compressed geometry ver-
tex instruction or setNormal instruction. The tag field can be between 0 and 6 bits
in length; the last two angle fields will have the same length, between 0 and 7 bits
for deltas and between 0 and 6 bits for absolutes. Six more bits are always present
for absolute normals. The range of sizes for a relative normal can be from 6 to 20
bits, and an absolute normal can be from 6 to 24 bits.
As usual, the first six bits of the sub-instruction are actually forwarded ahead of the
rest of the instruction. Depending on the length of the tag and delta fields, the first
six bits might only contain the tag, or the tag and some of the other field bits, or any
subset up to the entire sub-instruction, if short enough. However, in the case of relative normals, it is possible for the entire sub-instruction to be too short. The tag together with the delta angle fields must not be smaller than the six bits that get forwarded ahead; there can be no "empty" bits in the forwarded header. If necessary, the tag and/or delta angle fields must be expanded so that the total number of bits used for the entire sub-instruction is at least six.
A Normal sub-instruction is interpreted as relative or absolute depending on the
current setting of that bit in the decompression table entry corresponding to the tag
field. Unlike the Position and Color sub-instructions, the number of fields of a Normal sub-instruction differs between the absolute and relative types.
When the sub-instruction is relative, there are two delta angle fields after the tag
field, both of the same length, up to seven bits. These two fields are signed two’s-
complement numbers. If after delta addition the resulting angle is outside the cur-
rent sextant or octant, the sextant/octant wrapping rules (described elsewhere)
apply. If zero-length angle fields are specified, this is equivalent to specifying a
zero value for both fields, i.e., no change from the previous normal. It may be easier
to use this method rather than turning off normal bundling for a small number of
identical normals.
When the sub-instruction is absolute, four bit fields follow the tag. The first is a
three-bit (fixed-length) absolute sextant field, indicating in which of six sextants of
an octant of the unit sphere this normal resides. The second field is also fixed at
three bits, and indicates in which octant of the unit sphere the normal resides. The
last two fields are absolute angles within the sextant, and are unsigned positive
numbers, up to six bits in length. If zero-length angle fields are specified, this is
equivalent to specifying a zero for both fields.
At least one absolute normal must be specified before using any relative normals.
It is an error to have any relative normals before the first absolute normal.
Note that sextants are triangular in shape; thus the range of valid angular coordinates within a sextant fills only half the square, plus the diagonal. Formally, after shift normalization, the angular coordinates of ordinary absolute normals must obey the rule:

    θ̂_6 + φ̂_6 ≤ 64,  0 ≤ θ̂_6 < 64,  0 ≤ φ̂_6 < 64
A number of normals lie on the edges or corners where sextants meet (e.g., at θ̂ n =
0 and φ̂ n = 0). These normals do not have a unique encoding; the same normal can
be specified using different sextants or octants. All of these encodings are legal;
usually the choice of encoding is decided by using the one that makes it easiest to compute deltas from the previous normal and/or to the following normal.
Fourteen special absolute normals are encoded using the two unused settings of the three sextant bits. This is indicated by specifying the angle fields to have a length of zero (not present), and the first two bits of the sextant field to both have a value of 1. Table B-5 lists the 14 special normals.
Table B-5  The 14 Special Normals

Special   NX      NY      NZ     Comment
0000      1.0     0.0     0.0    +X axis
0010     –1.0     0.0     0.0    –X axis
0100      0.0     1.0     0.0    +Y axis
0110      0.0    –1.0     0.0    –Y axis
1000      0.0     0.0     1.0    +Z axis
1010      0.0     0.0    –1.0    –Z axis
0001      1/√3    1/√3    1/√3   +X +Y +Z
0011      1/√3    1/√3   –1/√3   +X +Y –Z
0101      1/√3   –1/√3    1/√3   +X –Y +Z
0111      1/√3   –1/√3   –1/√3   +X –Y –Z
1001     –1/√3    1/√3    1/√3   –X +Y +Z
1011     –1/√3    1/√3   –1/√3   –X +Y –Z
1101     –1/√3   –1/√3    1/√3   –X –Y +Z
1111     –1/√3   –1/√3   –1/√3   –X –Y –Z
Special normals are always absolute normals; they cannot be delta'd to from a previous normal. Unlike ordinary absolute normals, special normals have the additional restriction that they cannot be delta'd from. Thus the next normal after any special normal must always be an absolute normal (ordinary or special). In some cases this overhead can be avoided by never landing on a special normal, when this perturbation of the data does not negatively impact the visual appearance of the object.
The rest of the graphics pipeline and frame buffer following the geometry decom-
pression stage may choose not to use all (up to) 16 bits of normal component infor-
mation; in this case it is acceptable to truncate the trailing bits during
decompression. What the compressed geometry format does require is that normal
settings of any size up to 18-bit absolute normals be supported, even if all the
decompressed bits are not used.
The mesh buffer push bit indicates whether this vertex should be pushed into the
mesh buffer so as to be eligible for later re-reference.
The Position, Normal, and Color sub-instructions have the semantics docu-
mented in their individual sections.
Assembly syntax: relative: (setNormal <Tag> < ∆θ̂ n > < ∆φ̂ n >)
Assembly syntax: absolute special: (setNormal <Tag> <Special>)
Assembly syntax: <Sextant>, <Octant>, <Special>: same as for normal sub-
instruction.
The setNormal instruction has a two-bit opcode, and a Normal sub-instruction.
The Normal sub-instruction has the semantics documented in Section B.12.7,
“Normal Sub-instruction.”
If a setNormal instruction is present immediately before an mbr instruction, then the new normal value overrides the normal data present in the mesh buffer for that particular mesh buffer reference.
One slight complexity: get_header_bits() extracts only six bits of header for color or normal sub-instructions of a vertex instruction. It extracts a full eight bits of header in all other cases.
fixed-length value. The fixed-length value for position and color components is 16
bits in length (sign, unit, 14 fraction); the fields for normal angles are 7 bits
(signed), and 3 each for sextant and octant (if present).
absolute_position(x, y, z):
cur_x ← x, cur_y ← y, cur_z ← z
absolute_color(r, g, b {, α}):
cur_r ← r, cur_g ← g, cur_b ← b, {cur_α ← α}
relative_normal(∆u, ∆v):
flip_u[6] = { 4, 5, 3, 2, 0, 1 }
flip_v[6] = { 2, 4, 1, 1, 2, 4 }
flip_uv[6] = { 2, 3, 0, 1, 5, 4 }
The contents of the norms[] table are exactly specified, and the next revision of this specification will contain an exact listing of the values.
normal(n):
current_normal ← n, normal_override ← 1
color(c):
current_color ← c, color_override ← 1
vertex(rep, push, p {, n} {, c}):
current_position ← p,
if (bnv) current_normal ← n,
if (bcv) current_color ← c,
output_vertex(rep, current_position, current_normal,
current_color)
if (push) mesh_buffer[mesh_index].position ← p
mesh_buffer_reference(rep, i):
current_position ←
mesh_buffer[(mesh_index - i - 1) & 15].position
if (bnv && !normal_override)
current_normal ← mesh_buffer[(mesh_index - i - 1) & 15].normal
if (bcv && !color_override)
current_color ← mesh_buffer[(mesh_index - i - 1) & 15].color
normal_override ← 0, color_override ← 0
output_vertex(rep, current_position, current_normal,
current_color)
setState(new_bnv, new_bcv, new_cap):
bnv ← new_bnv,
bcv ← new_bcv,
cap ← new_cap,
normal_override ← 0, color_override ← 0
nop(length):
(null)
output_vertex(restart_reverse, newv):
newest ← newv, number_of_vertices ← 1, rev ← 1
output_vertex(restart, newv):
newest ← newv, number_of_vertices ← 1, rev ← 0
output_vertex(replace_middle, newv):
if (number_of_vertices < 2)
middle ← newest, newest ← newv, number_of_vertices++
else if (number_of_vertices < 3)
oldest ← middle, middle ← newest, newest ← newv,
number_of_vertices++,
intermediate_triangle(restart, oldest, middle, newest)
else if (number_of_vertices == 3)
middle ← newest, newest ← newv,
intermediate_triangle(restart, oldest, middle, newest)
output_vertex(replace_oldest, newv):
if (number_of_vertices < 2)
middle ← newest, newest ← newv, number_of_vertices++
else if (number_of_vertices < 3)
oldest ← middle, middle ← newest, newest ← newv,
number_of_vertices++,
intermediate_triangle(restart, oldest, middle, newest)
else if (number_of_vertices == 3)
oldest ← middle, middle ← newest, newest ← newv,
rev ← 1 - rev,
intermediate_triangle(restart, oldest, middle, newest)
if (!rev)
final_triangle(v1.position, v1.normal, v1.color,
v2.position, v2.normal, v2.color,
v3.position, v3.normal, v3.color)
else if (rev)
final_triangle(v2.position, v2.normal, v2.color,
v1.position, v1.normal, v1.color,
v3.position, v3.normal, v3.color)
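The output_vertex state machine above can be rendered as runnable Java. The following sketch uses integer vertex ids in place of full position/normal/color records, and its names are illustrative; the winding fix mirrors the final_triangle rule, swapping the first two vertices when rev is set.

```java
import java.util.ArrayList;
import java.util.List;

public class StripAssembler {
    static final int RESTART = 0, RESTART_REVERSE = 1,
                     REPLACE_MIDDLE = 2, REPLACE_OLDEST = 3;

    int oldest, middle, newest, count, rev;
    final List<int[]> triangles = new ArrayList<>();

    void outputVertex(int rep, int v) {
        switch (rep) {
            case RESTART:         newest = v; count = 1; rev = 0; break;
            case RESTART_REVERSE: newest = v; count = 1; rev = 1; break;
            case REPLACE_MIDDLE:
                if (count < 2)      { middle = newest; newest = v; count++; }
                else if (count < 3) { oldest = middle; middle = newest; newest = v; count++; emit(); }
                else                { middle = newest; newest = v; emit(); }
                break;
            case REPLACE_OLDEST:
                if (count < 2)      { middle = newest; newest = v; count++; }
                else if (count < 3) { oldest = middle; middle = newest; newest = v; count++; emit(); }
                else                { oldest = middle; middle = newest; newest = v;
                                      rev = 1 - rev; emit(); }
                break;
        }
    }

    // Winding is corrected by swapping the first two vertices when rev is set.
    void emit() {
        triangles.add(rev == 0 ? new int[]{oldest, middle, newest}
                               : new int[]{middle, oldest, newest});
    }

    public static void main(String[] args) {
        StripAssembler s = new StripAssembler();
        s.outputVertex(RESTART, 1);
        s.outputVertex(REPLACE_OLDEST, 2);
        s.outputVertex(REPLACE_OLDEST, 3); // emits first triangle (1, 2, 3)
        s.outputVertex(REPLACE_OLDEST, 4); // emits (3, 2, 4) after the winding fix
        System.out.println(s.triangles.size()); // 2
    }
}
```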
B.15.3 Position
form corresponding to an offset to the center of the bounding box, and an inverse
scale by the half length of the longest side of the bounding box are created as a pro-
logue for the geometry data. Note that in practice a little more care must be taken.
The greatest positive value is actually (2^(n−1) − 1) / 2^(n−1) when positions are quantized to n bits. By symmetry, the smallest negative value allowed is −(2^(n−1) − 1) / 2^(n−1). The value −1 (only sign bit set, all other bits 0) is explicitly not allowed. Thus when computing the scale factor (and center) that will normalize the
geometry, the actual representation range needs to be taken into account.
B.15.4 Normals
Fold the XYZ components of the normal to the positive (prime) octant
If an XYZ component of the normal is negative, invert it and save the original sign
bits as a three-bit octant value. It is important when compressing to always strip the
sign bits off first before applying sextant folding, and to reverse the process when
decompressing. Note that the octant bits read left to right: the upper bit is for the x-
axis, the middle for the y-axis, and the lowermost of the three is for the z-axis.
oct = 0;
if(nx < 0.0) oct |= 4, nx = -nx
if(ny < 0.0) oct |= 2, ny = -ny
if(nz < 0.0) oct |= 1, nz = -nz
sex = 0;
if (nx < ny) t = nx, nx = ny, ny = t, sex |= 1
if (nz < ny) t = ny, ny = nz, nz = t, sex |= 2
if (nx < nz) t = nx, nx = nz, nz = t, sex |= 4
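The folding steps above translate into a runnable Java sketch (the class and method names are ours). After the two folds, the components satisfy nx ≥ nz ≥ ny ≥ 0, placing the normal in the canonical sextant of the positive octant:

```java
import java.util.Arrays;

public class NormalFold {
    // Folds (nx, ny, nz) in place into the canonical octant and sextant,
    // returning {oct, sex} exactly as in the compression steps above.
    static int[] fold(double[] n) {
        int oct = 0, sex = 0;
        // Octant fold: strip sign bits first, recording them left to right
        // as the x, y, z sign bits of the octant value.
        if (n[0] < 0.0) { oct |= 4; n[0] = -n[0]; }
        if (n[1] < 0.0) { oct |= 2; n[1] = -n[1]; }
        if (n[2] < 0.0) { oct |= 1; n[2] = -n[2]; }
        // Sextant fold: the three conditional swaps from the text.
        double t;
        if (n[0] < n[1]) { t = n[0]; n[0] = n[1]; n[1] = t; sex |= 1; }
        if (n[2] < n[1]) { t = n[1]; n[1] = n[2]; n[2] = t; sex |= 2; }
        if (n[0] < n[2]) { t = n[0]; n[0] = n[2]; n[2] = t; sex |= 4; }
        return new int[]{oct, sex};
    }

    public static void main(String[] args) {
        double[] n = {-0.2, 0.9, 0.4};
        int[] r = fold(n);
        System.out.println(r[0] + " " + r[1] + " " + Arrays.toString(n));
        // 4 1 [0.9, 0.2, 0.4]
    }
}
```

Decompression reverses the process: undo the sextant swaps first, then reapply the stripped sign bits.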
B.15.5 Colors
The colors are assumed to be in a 0.0 to 1.0 representation to begin with.
Sextants 0 and 4, 1 and 5, and 2 and 3 share the U = 0 edge. When crossing this
boundary, ∆U becomes ~U – last_u. This will generate a negative cur_u value
during decompression, which causes the decompressor to invert cur_u and look up
the new sextant in a table.
Sextants 0 and 2, 1 and 3, and 4 and 5 share the U + V = 64 edge. ∆U becomes 64
– U – last_u and ∆V becomes 64 – V – last_v. When cur_u + cur_v > 64, the
decompressor sets cur_u = 64 – cur_u and cur_v = 64 – cur_v, and a table lookup
determines the new sextant.
Each sextant shares the V = 0 edge with its corresponding sextant in another octant.
When in sextants 1 or 5, the normal moves across the X-axis, across the Y-axis for
sextants 0 or 4, and across the Z-axis for sextants 2 or 3. ∆V becomes ~V – last_
v. The decompressor inverts a negative cur_v and performs a table lookup for a
mask to exclusive-OR with the current octant value.
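A minimal sketch of the U = 0 edge wrap, assuming two's-complement integer arithmetic (hypothetical helper class; the real decompressor additionally looks up the new sextant in a table, omitted here):

```java
// Sketch of the u = 0 edge wrap: the compressor sends dU = ~u - last_u,
// and the decompressor, on seeing a negative accumulated cur_u, inverts
// it (bitwise) and would switch sextants via a table lookup.
public class UWrap {
    // Compressor side: delta that crosses the u = 0 edge.
    public static int deltaAcrossU0(int u, int lastU) {
        return ~u - lastU;
    }

    // Decompressor side: accumulate the delta; a negative result is
    // folded back with a bitwise invert (sextant lookup omitted).
    public static int decode(int lastU, int dU) {
        int curU = lastU + dU;
        if (curU < 0) curU = ~curU;
        return curU;
    }
}
```

Because ~u = −u − 1 in two's complement, the inversion on the decode side recovers u exactly.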
Note: When using the normal wrapping rules, a subtle bug can be introduced due
to the ambiguity of normals on a shared edge between two sextants. The normal
encoding rules have unambiguous tie-breaking rules to determine which octant and
sextant a given normal resides in. However, the wrapping rules assume by default
that a delta-ed normal is in the same sextant and octant as its predecessor if the
delta only landed on an edge. This can be subtly different from the sextant and
octant that the encoding rules would have suggested. The proper procedure is to
keep track of which octant and sextant the decompressor would believe the
generated normals lie in, and when the normal being delta-ed to lands on an edge
of this region, change its sextant and octant from what the encoding rules
suggested to match where it is now delta-ing from. This change in default
encoding is permissible because the rectilinear normal encoded by values on a
sextant edge is identical no matter which sextant claims ownership.
Otherwise the normals cannot be delta encoded, and so the second (target) normal
must be represented by an absolute reference: its three-bit octant, three-bit
sextant, and two N-bit U and V addresses. This is the length to be histogrammed
for this pair of normals.
input data, doing this normal perturbation must be a policy choice of the compres-
sor itself, and subject to quality constraints of the user.
6. Same as level 5, but printed using hex numbers (preceded with the 0x
prefix).
7. Un-delta’d. Like level 3, but relative values have had the running total
added to them, to show what the current full value is. Absolute values are
unchanged from level 3. To differentiate from level 3, an ‘A’ suffix is
added to the opcode name.
8. Same as level 7, but printed using hex numbers (preceded with the 0x
prefix).
9. Floating point. While up to now all values have been subsets of 16-bit
integers, before conversion to integer and quantization most values were
floating point numbers in the 0.0 to 1.0 or −1.0 to 1.0 range. Level 9
shows the values as floating point numbers, but it must be cautioned that
these data fields, while similar to the input uncompressed, unquantized
values, will usually differ slightly from the original data. This floating
point output format is primarily included as a convenience for users who
want to examine the data closer to its original space.
10. Same as level 9, but non-floating-point numbers are printed using hex
numbers (preceded with the 0x prefix).
Once again, while the disassembler supports all 10 levels of output options, the
assembler only supports levels 1 and 2.
The syntax is fairly simple. Because setting colors or normals can be either a
stand-alone instruction or a component of a vertex instruction, parenthetical
instruction grouping (Lisp style) is used to make the ownership of arguments
clear.
As an example, below is the disassembly (print level 1) of a four-sided pyramid:
(nop 0)
(setTable Position 32-47 2 4 Rel)
(setTable Position 56-63 3 4 Rel)
(setTable Position 0-31 12 4 Rel)
(setTable Position 48-55 12 4 Abs)
(setTable Normal 0-31 5 0 Rel)
(setTable Normal 32-63 6 0 Abs)
(setTable Color 32-63 2 8 Rel)
(setTable Color 0-31 8 8 Abs)
(setState normalsBundled colorsUnbundled alphaUnbundled)
(setState normalsBundled colorsUnbundled alphaUnbundled)
(setColor 0 127 51 12)
(setState normalsBundled colorsUnbundled alphaUnbundled)
(setColor 32 0 0 0)
(vertex RST (Position 48 -2047 -2047 -205)
(Normal 32 4 --+ 44 0))
Rule 2: Beginnings
Every Compressed Geometry sequence starts with the body field of a nop in-
struction. Initial processing proceeds as if a forwarded header of a nop instruction
had just been seen. The length field of this nop instruction body can be of any
legal length, though usually by convention the length field is 0, and thus the
first body consists of five zeros.
Rule 3: Endings
The last header in a Compressed Geometry sequence is a nop. This is followed
by the body of the next to last instruction. This next to last instruction can be
any instruction, and its body can be of any valid length for that instruction type,
but the body must end on a four-byte (32-bit word) aligned boundary. To achieve
this, usually the next to last, and possibly the next to next to last, instruction(s)
are also nops, with lengths chosen to satisfy the ending requirement. Note that
the body for the last instruction (the nop) is not present in the Compressed Ge-
ometry sequence. The end of the Compressed Geometry is determined from a
separately specified size outside of the Compressed Geometry proper. Note
that this ending convention is symmetrical with the starting convention; the se-
quential concatenation of two valid Compressed Geometry objects is also a
valid Compressed Geometry object. For hardware, after a valid Compressed
Geometry object has been executed, another valid Compressed Geometry can
be executed without any pipeline flushes if desired.
Rule 6: No Defaults
All state used in the processing of Compressed Geometry must be defined be-
fore it is used; there are no implicit defaults for any of the state. The state
includes the contents of the decompression tables as defined by the setTable
instruction, the three bundling bits as defined by the setState instruction, the
contents of the mesh buffer as defined by vertex instructions with push en-
abled, and the current position, normal, and color (and alpha), as defined by ab-
solute settings in vertex instructions, setNormal instructions, and setColor
instructions. Note that this does not mean that all possible state needs to be de-
fined within a Compressed Geometry object. For example, only those portions
of the decompression tables actually referenced by a vertex or setNormal or
setColor instruction need be initialized first. The bits specified by setState al-
ways need to be referenced, unless there are no vertex instructions, which
would only occur in a geometry-less Compressed Geometry object. Mesh buff-
er elements need only be defined if they are accessed by mesh buffer reference
instructions. The current normal and the current color (and alpha) are special
cases; if they are not used within a Compressed Geometry object they may not
need to be initialized depending on the semantics of the outer incorporating
graphics API.
Specifically, in a valid Compressed Geometry sequence, no relative values for
positions, normals, colors (or alpha) may appear in a vertex or setNormal or
setColor instruction until after an absolute value has appeared for that particu-
lar item. There is no inheritance between different Compressed Geometry
objects; each must be entirely stand-alone when it comes to state.
point value for the current R, G, B (and sometimes α) color state. Only positive
values are valid for these fields.
head position and orientation contributes little to a camera model’s camera posi-
tion and orientation; however, it does affect the projection matrix.
From a camera-based perspective, the application developer must construct the
camera’s position and orientation by combining the virtual-world component (the
position and orientation of the magic carpet) and the physical-world component
(the user’s instantaneous head position and orientation).
Java 3D’s view model incorporates the appropriate abstractions to compensate
automatically for such variability in end-user hardware environments.
Figure: Coordinate systems for a fishtank-mode display environment (left and
right image plate, coexistence, ViewPlatform, virtual world).
The Left Image Plate and Right Image Plate Coordinate Systems
The left image plate and right image plate coordinate systems correspond with
the physical coordinate system of the image generator associated with the left
and right eye, respectively. The image plate is defined as having its origin at the
lower left-hand corner of the display area and lying in the display area’s XY
plane. Note that the left image plate’s XY plane does not necessarily lie parallel
to the right image plate’s XY plane. Note that left image plate and right image
plate are different coordinate systems than the room-mounted display environ-
ment’s image plate coordinate system.
Head
Methods
These methods set and retrieve a flag specifying whether to enable the use of six-
degrees-of-freedom tracking hardware.
These methods set and retrieve a flag that specifies whether or not to repeatedly
generate the user-head-to-vworld transform (initially false).
This method returns a string that contains the values of this View object.
Methods
These two methods set and retrieve the current policy for view computation. The
policy variable specifies how Java 3D uses its transforms in computing new
viewpoints, as follows:
• SCREEN_VIEW: Specifies that Java 3D should compute new viewpoints
using the sequence of transforms appropriate to nonattached, screen-based
head-tracked display environments, such as fishtank VR, multiple-projec-
tion walls, and VR desks. This is the default setting.
These methods set and retrieve the current screen scale policy.
These methods set and retrieve the screen scale value. This value is used when
the screen scale policy is SCALE_EXPLICIT.
Constants
This variable tells Java 3D that it should modify the eyepoint position so it is
located at the appropriate place relative to the window to match the specified
field of view. This implies that the view frustum will change whenever the appli-
cation changes the field of view. In this mode, the eye position is read-only. This
is the default setting.
This variable tells Java 3D to interpret the eye’s position relative to the entire
screen. No matter where an end user moves a window (a Canvas3D), Java 3D
continues to interpret the eye’s position relative to the screen. This implies that
the view frustum changes shape whenever an end user moves the location of a
window on the screen. In this mode, the field of view is read-only.
This variable specifies that Java 3D should interpret the eye’s position informa-
tion relative to the window (Canvas3D). No matter where an end user moves a
window (a Canvas3D), Java 3D continues to interpret the eye’s position relative
to that window. This implies that the frustum remains the same no matter where
the end user moves the window on the screen. In this mode, the field of view is
read-only.
This variable specifies that Java 3D should interpret the fixed eyepoint position in
the view as relative to the origin of coexistence coordinates. This eyepoint is
transformed from coexistence coordinates to image plate coordinates for each
Canvas3D. As in RELATIVE_TO_SCREEN mode, this implies that the view frustum
shape will change whenever a user moves the location of a window on the
screen.
Methods
This variable specifies how Java 3D handles the predefined eyepoint in a non-
head-tracked application. The variable can contain one of four values:
RELATIVE_TO_FIELD_OF_VIEW, RELATIVE_TO_SCREEN, RELATIVE_TO_WINDOW, or
RELATIVE_TO_COEXISTENCE. The default value is RELATIVE_TO_FIELD_OF_
VIEW.
Constants
These constants specify the monoscopic view policy. The first constant specifies
that the monoscopic view should be the view as seen from the left eye. The sec-
ond constant specifies that the monoscopic view should be the view as seen from
the right eye. The third constant specifies that the monoscopic view should be the
view as seen from the “center eye,” the fictional eye half-way between the left
and right eyes. This is the default setting.
Methods
Constants
These constants set the visibility policy for this view. The first constant specifies
that only visible objects are drawn (this is the default). The second constant spec-
ifies that only invisible objects are drawn. The third constant specifies that both
visible and invisible objects are drawn.
Methods
These methods set and retrieve the visibility policy for this view. The policy can
be one of VISIBILITY_DRAW_VISIBLE, VISIBILITY_DRAW_INVISIBLE, or
VISIBILITY_DRAW_ALL. The default visibility policy is VISIBILITY_DRAW_VISI-
BLE.
These methods set and retrieve the coexistenceCentering enable flag. If the coex-
istenceCentering flag is true, the center of coexistence in image plate coordi-
nates, as specified by the trackerBaseToImagePlate transform, is translated to the
center of either the window or the screen in image plate coordinates, according
to the value of windowMovementPolicy.
By default, coexistenceCentering is enabled. It should be disabled if the tracker-
BaseToImagePlate calibration transform is set to a value other than the identity
(for example, when rendering to multiple screens or when head tracking is
enabled). This flag is ignored for HMD mode, or when the coexistenceCenterIn-
PworldPolicy is not NOMINAL_SCREEN.
These methods set and retrieve the position of the manual right and left eyes in
coexistence coordinates. These values determine eye placement when a head
tracker is not in use and the application is directly controlling the eye position in
coexistence coordinates. These values are ignored when in head-tracked mode or
when the windowEyepointPolicy is not RELATIVE_TO_COEXISTENCE.
The first method takes the sensor’s last reading and generates a sensor-to-vworld
coordinate system transform. This Transform3D object takes points in that sen-
sor’s local coordinate system and transforms them into virtual world coordinates.
The next two methods retrieve the specified sensor’s last hotspot location in vir-
tual world coordinates.
Figure: The view branch graph: TransformGroup nodes above the ViewPlatform,
and a View with its Canvas3D and Screen3D objects, PhysicalBody, and
PhysicalEnvironment.
Measured Parameters
These calibration parameters are set once, typically by a browser, calibration pro-
gram, system administrator, or system calibrator, not by an applet.
These methods store the screen’s (image plate’s) physical width and height in
meters. The system administrator or system calibrator must provide these values
by measuring the display’s active image width and height. In the case of a head-
mounted display, this should be the display’s apparent width and height at the
focal plane.
This method returns a status flag indicating whether scene antialiasing is avail-
able.
These methods set and retrieve the position of the manual left and right eyes in
image plate coordinates. These values determine eye placement when a head
tracker is not in use and the application is directly controlling the eye position in
image plate coordinates. In head-tracked mode or when the windowEyepoint-
Policy is RELATIVE_TO_FIELD_OF_VIEW or RELATIVE_TO_COEXISTENCE, this
value is ignored. When the windowEyepointPolicy is RELATIVE_TO_WINDOW,
only the Z value is used.
public void getLeftEyeInImagePlate(Point3d position)
public void getRightEyeInImagePlate(Point3d position)
public void getCenterEyeInImagePlate(Point3d position)
These methods retrieve the actual position of the left eye, right eye, and center
eye in image plate coordinates and copy that value into the object provided. The
center eye is the fictional eye half-way between the left and right eye. These
three values are a function of the windowEyepointPolicy, the tracking enable
flag, and the manual left, right, and center eye positions.
These methods compute the position of the specified AWT pixel value in image
plate coordinates and copy that value into the object provided.
This method projects the specified point from image plate coordinates into AWT
pixel coordinates. The AWT pixel coordinates are copied into the object pro-
vided.
These methods retrieve the physical width and height of this canvas window, in
meters.
These methods set and retrieve the policy regarding how Java 3D generates
monoscopic view. If the policy is set to View.LEFT_EYE_VIEW, the view gener-
ated corresponds to the view as seen from the left eye. If set to View.RIGHT_
EYE_VIEW, the view generated corresponds to the view as seen from the right eye.
If set to View.CYCLOPEAN_EYE_VIEW, the view generated corresponds to the view
as seen from the “center eye,” the fictional eye half-way between the left and
right eye. The default monoscopic view policy is View.CYCLOPEAN_EYE_VIEW.
Note: For backward compatibility with Java 3D 1.1, if this attribute is set to its
default value of View.CYCLOPEAN_EYE_VIEW, the monoscopic view policy in the
View object will be used. An application should not use both the deprecated View
method and this Canvas3D method at the same time.
Constructors
public PhysicalBody()
Constructs a default user PhysicalBody object with the following default eye and
ear positions:
Parameter                                Default Value
leftEyePosition                          (–0.033, 0.0, 0.0)
rightEyePosition                         (0.033, 0.0, 0.0)
leftEarPosition                          (–0.080, –0.030, 0.095)
rightEarPosition                         (0.080, –0.030, 0.095)
nominal eye height from ground           1.68
nominal eye offset from nominal screen   0.4572
head-to-head-tracker transform           identity
These methods construct a PhysicalBody object with the specified eye and ear
positions.
Methods
These methods set and retrieve the position of the center of rotation of a user’s
left and right eyes in head coordinates.
These methods set and retrieve the position of the user’s left and right ear posi-
tions in head coordinates.
These methods set and retrieve the user’s nominal eye height as measured from
the ground to the center eye in the default posture. In a standard computer moni-
tor environment, the default posture would be seated. In a multiple-projection
display room environment or a head-tracked environment, the default posture
would be standing.
These methods set and retrieve the offset from the center eye to the center of the
display screen. This offset distance allows an “over the shoulder” view of the
scene as seen by the end user.
These methods set and retrieve the head-to-head-tracker coordinate system trans-
form. If head tracking is enabled, this transform is a calibration constant. If head
tracking is not enabled, this transform is not used. This transform is used in both
SCREEN_VIEW and HMD_VIEW modes.
This method returns a string that contains the values of this PhysicalBody object.
Constructors
public PhysicalEnvironment()
using this device. See Chapter 12, “Audio Devices,” for more details on the fields
and methods that set and initialize the device driver and output playback associ-
ated with the audio device.
Methods
The PhysicalEnvironment object specifies the following methods pertaining to
audio output devices and input sensors.
This method selects the specified AudioDevice object as the device through
which audio rendering for this PhysicalEnvironment will be performed.
These methods add and remove an input device to or from the list of input
devices.
These methods set and retrieve the count of the number of sensors stored within
the PhysicalEnvironment object. It defaults to a small number of sensors. It
should be set to the number of sensors available in the end-user’s environment
before initializing the Java 3D API.
This method returns a status flag indicating whether or not tracking is available.
The first method sets the sensor specified by the index to the sensor provided.
The second method retrieves the specified sensor.
These methods set and retrieve the index of the dominant hand.
These methods set and retrieve the index of the nondominant hand.
These methods set and retrieve the index of the head, right hand, and left hand.
The index parameter refers to the sensor index.
These methods set and retrieve the physical coexistence policy used in this phys-
ical environment. This policy specifies how Java 3D will place the user’s eye-
point as a function of current head position during the calibration process.
Java 3D permits one of three values: NOMINAL_HEAD, NOMINAL_FEET, or NOMINAL_
SCREEN.
ent a virtual camera within a virtual scene, to manipulate some parameters of the
virtual camera’s lens (specify its field of view), and to specify the locations of
the near and far clipping planes.
Java 3D allows applications to enable compatibility mode for room-mounted,
non-head-tracked display environments, or to disable compatibility mode using
the following methods. Camera-based viewing functions are only available in
compatibility mode.
Methods
Note: Use of these view-compatibility functions will disable some of Java 3D’s
view model features and limit the portability of Java 3D programs. These methods
are primarily intended to help jump-start porting of existing applications.
View Frustum
The location of the near and far clipping planes allows the application
programmer to specify which objects Java 3D should not draw. Objects too far away from
the current eyepoint usually do not result in interesting images. Those too close
to the eyepoint might obscure the interesting objects. By carefully specifying
near and far clipping planes, an application programmer can control which
objects the renderer will not be drawing.
From the perspective of the display device, the virtual camera’s image plane cor-
responds to the display screen. The camera’s placement, orientation, and field of
view determine the shape of the view frustum.
This is a utility method that specifies the position and orientation of a viewing
transform. It works very similarly to the equivalent function in OpenGL. The
inverse of this transform can be used to control the ViewPlatform object within
the scene graph. Alternatively, this transform can be passed directly to the View’s
VpcToEc transform via the compatibility-mode viewing functions (see
Section C.11.2.3, “Setting the Viewing Transform”).
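The viewing transform described above can be sketched as a standalone, gluLookAt-style matrix construction (the class and helper names below are hypothetical, not the Java 3D API; Transform3D.lookAt computes an equivalent transform):

```java
// Standalone sketch of a look-at viewing matrix (row-major 4x4), built
// from eye position, look-at point, and up vector, assuming right-handed
// coordinates as used throughout Java 3D.
public class LookAt {
    public static double[][] lookAt(double[] eye, double[] center, double[] up) {
        double[] f = normalize(sub(center, eye));   // forward direction
        double[] s = normalize(cross(f, up));       // side (right) direction
        double[] u = cross(s, f);                   // recomputed up
        return new double[][] {
            { s[0],  s[1],  s[2],  -dot(s, eye) },
            { u[0],  u[1],  u[2],  -dot(u, eye) },
            { -f[0], -f[1], -f[2],  dot(f, eye) },
            { 0, 0, 0, 1 }
        };
    }

    static double[] sub(double[] a, double[] b) {
        return new double[] { a[0] - b[0], a[1] - b[1], a[2] - b[2] };
    }
    static double dot(double[] a, double[] b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }
    static double[] cross(double[] a, double[] b) {
        return new double[] { a[1]*b[2] - a[2]*b[1],
                              a[2]*b[0] - a[0]*b[2],
                              a[0]*b[1] - a[1]*b[0] };
    }
    static double[] normalize(double[] v) {
        double len = Math.sqrt(dot(v, v));
        return new double[] { v[0]/len, v[1]/len, v[2]/len };
    }
}
```

As the text notes, the inverse of this matrix could drive a ViewPlatform, or the matrix itself could serve as a VpcToEc transform in compatibility mode.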
The frustum method establishes a perspective projection with the eye at the apex
of a symmetric view frustum. The transform maps points from eye coordinates to
clipping coordinates. The clipping coordinates generated by the resulting trans-
form are in a right-handed coordinate system (as are all other coordinate systems
in Java 3D).
The arguments define the frustum and its associated perspective projection:
(left, bottom, -near) and (right, top, -near) specify the points on the near
clipping plane that map onto the lower-left and upper-right corners of the
window, respectively. The -far parameter specifies the far clipping plane. See
Figure C-8.
The perspective method establishes a perspective projection with the eye at the
apex of a symmetric view frustum, centered about the Z-axis, with a fixed field of
view. The resulting perspective projection transform mimics a standard camera-
based view model. The transform maps points from eye coordinates to clipping
coordinates. The clipping coordinates generated by the resulting transform are in
a right-handed coordinate system.
The arguments define the frustum and its associated perspective projection:
-near and -far specify the near and far clipping planes; fovx specifies the field
of view in the X dimension, in radians; and aspect specifies the aspect ratio of
the window. See Figure C-9.
Figure C-8: A perspective viewing frustum defined by left, right, bottom, top,
near, and far.
Figure C-9: A camera-based perspective view defined by fovx, aspect = x/y,
zNear, and zFar.
Figure: The view volume bounded by the near and far clipping planes.
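A standalone sketch of the frustum and perspective projections just described, assuming OpenGL-style clip coordinates (the class is hypothetical, not the Transform3D API itself):

```java
// Sketch of a glFrustum-style perspective projection matrix (row-major),
// mapping eye coordinates to clipping coordinates, plus a perspective()
// form that takes fovx (horizontal field of view, radians) and aspect.
public class Frustum {
    public static double[][] frustum(double left, double right,
                                     double bottom, double top,
                                     double near, double far) {
        double[][] m = new double[4][4];
        m[0][0] = 2.0 * near / (right - left);
        m[0][2] = (right + left) / (right - left);
        m[1][1] = 2.0 * near / (top - bottom);
        m[1][2] = (top + bottom) / (top - bottom);
        m[2][2] = -(far + near) / (far - near);
        m[2][3] = -2.0 * far * near / (far - near);
        m[3][2] = -1.0;   // produces the perspective divide: w' = -z
        return m;
    }

    // Symmetric frustum from the horizontal field of view and the window's
    // aspect ratio (x/y), as described in the text.
    public static double[][] perspective(double fovx, double aspect,
                                         double near, double far) {
        double right = near * Math.tan(fovx / 2.0);
        double top = right / aspect;
        return frustum(-right, right, -top, top, near, far);
    }
}
```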
These compatibility-mode methods specify a viewing frustum for the left and
right eye that transforms points in eye coordinates to clipping coordinates. If
compatibility mode is disabled, a RestrictedAccessException is thrown. In
monoscopic mode, only the left eye projection matrix is used.
T HE Java 3D API uses the standard Java exception model for handling errors
or exceptional conditions. In addition to using existing exception classes, such as
ArrayIndexOutOfBoundsException and IllegalArgumentException, Java 3D
defines several new runtime exceptions. These exceptions are thrown by various
Java 3D methods or by the Java 3D renderer to indicate an error condition of
some kind.
The exceptions defined by Java 3D, as part of the javax.media.j3d package, are
described in the following sections. They all extend RuntimeException and, as
such, need not be declared in the throws clause of methods that might cause the
exception to be thrown. This appendix is not an exhaustive list of all exceptions
expected for Java 3D. Additional exceptions will be added as the need arises.
D.1 BadTransformException
Indicates an attempt to use a Transform3D object that is inappropriate for the
object in which it is being used. For example:
• Transforms that are used in the scene graph, within a TransformGroup
node, must be affine. They may optionally contain a nonuniform scale or a
shear, subject to other listed restrictions.
• All transforms in the TransformGroup nodes above a ViewPlatform object
must be congruent. This ensures that the Vworld-coordinates-to-ViewPlat-
form-coordinates transform is angle- and length-preserving with no shear
and only uniform scale.
• Most viewing transforms other than those in the scene graph can only con-
tain translation and rotation.
Constructors
public BadTransformException()
public BadTransformException(String str)
These create the exception object that outputs the exception message. The first
form uses the default message. The second form specifies the message string to
be output.
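The two-constructor pattern shared by all the Java 3D exception classes can be sketched with a hypothetical exception type (ExampleException is a stand-in, not a Java 3D class):

```java
// Sketch of the two-constructor pattern that the Java 3D exceptions
// follow: a no-argument form with the default (null) detail message, and
// a form that takes an explicit message string. Extending RuntimeException
// means the exception is unchecked and need not appear in throws clauses.
public class ExampleException extends RuntimeException {
    // First form: default message.
    public ExampleException() {
        super();
    }

    // Second form: caller-specified message.
    public ExampleException(String str) {
        super(str);
    }
}
```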
D.2 CapabilityNotSetException
This exception indicates an access to a live or compiled Scene Graph object
without the required capability set.
Constructors
public CapabilityNotSetException()
public CapabilityNotSetException(String str)
These create the exception object that outputs the exception message. The first
form uses the default message. The second form specifies the message string to
be output.
D.3 DanglingReferenceException
This exception indicates that during a cloneTree call, an updated reference was
requested for a node that did not get cloned. This occurs when a subgraph is
duplicated via cloneTree and has at least one leaf node that contains a reference
to a node with no corresponding node in the cloned subgraph. This results in two
leaf nodes wanting to share access to the same node.
If dangling references are to be allowed during the cloneTree call, cloneTree
should be called with the allowDanglingReferences parameter set to true.
Constructors
public DanglingReferenceException()
public DanglingReferenceException(String str)
These create the exception object that outputs the exception message. The first
form uses the default message. The second form specifies the message string to
be output.
D.4 IllegalRenderingStateException
This exception indicates an illegal state for rendering. This includes:
• Lighting without specifying normals in a geometry array object
• Texturing without specifying texture coordinates in a geometry array ob-
ject
Constructors
public IllegalRenderingStateException()
public IllegalRenderingStateException(String str)
These create the exception object that outputs the exception message. The first
form uses the default message. The second form specifies the message string to
be output.
D.5 IllegalSharingException
This exception indicates an illegal attempt to share a scene graph object. For
example, the following are illegal:
• Referencing a shared subgraph in more than one virtual universe
• Using the same component object both in the scene graph and in an imme-
diate-mode graphics context
• Including an unsupported type of leaf node within a shared subgraph
• Referencing a BranchGroup node in more than one of the following ways:
• Attaching it to a (single) Locale
• Adding it as a child of a Group node within the scene graph
• Referencing it from a (single) Background leaf node as background geometry
Constructors
public IllegalSharingException()
public IllegalSharingException(String str)
These create the exception object that outputs the exception message. The first
form uses the default message. The second form specifies the message string to
be output.
D.6 MismatchedSizeException
This exception indicates that an operation cannot be completed properly because
of a mismatch in the sizes of the object attributes.
Constructors
public MismatchedSizeException()
public MismatchedSizeException(String str)
These create the exception object that outputs the exception message. The first
form uses the default message. The second form specifies the message string to
be output.
D.7 MultipleParentException
This exception extends IllegalSharingException and indicates an attempt to
add a node that is already a child of one group node into another group node.
Constructors
public MultipleParentException()
public MultipleParentException(String str)
These create the exception object that outputs the exception message. The first
form uses the default message. The second form specifies the message string to
be output.
D.8 RestrictedAccessException
This exception indicates an attempt to access or modify a state variable without
permission to do so. For example, invoking a set method for a state variable that
is currently read-only.
Constructors
public RestrictedAccessException()
public RestrictedAccessException(String str)
These create the exception object that outputs the exception message. The first
form uses the default message. The second form specifies the message string to
be output.
D.9 SceneGraphCycleException
This exception indicates that one of the live scene graphs attached to a viewable
Locale has a cycle in it. Java 3D scene graphs are directed acyclic graphs and, as
such, do not permit cycles. This exception is either thrown by the Java 3D ren-
derer at scene graph traversal time or when a scene graph containing a cycle is
made live (added as a descendant of a Locale object).
Constructors
public SceneGraphCycleException()
public SceneGraphCycleException(String str)
These create the exception object that outputs the exception message. The first
form uses the default message. The second form specifies the message string to
be output.
D.10 SingularMatrixException
This exception, in the javax.vecmath package, indicates that the inverse of a
matrix cannot be computed.
Constructors
public SingularMatrixException()
public SingularMatrixException(String str)
These create the exception object that outputs the exception message. The first
form uses the default message. The second form specifies the message string to
be output.
D.11 SoundException
This exception indicates a problem in loading or playing a sound sample.
Constructors
public SoundException()
public SoundException(String str)
These create the exception object that outputs the exception message. The first
form uses the default message. The second form specifies the message string to
be output.
T HIS appendix contains the Java 3D equations for fog, lighting, sound, and
texture mapping. Many of the equations use the following symbols:
⋅ Multiplication
• Function operator for sound equations,
Dot product for all other equations
C′ = C ⋅ f + Cf ⋅ (1 – f)     (E.1)
The fog coefficient, f, is computed differently for linear and exponential fog. The
equation for linear fog is as follows:
f = (B – z) ⁄ (B – F)     (E.2)
The equation for exponential fog is as follows:
f = e^(–d ⋅ z)     (E.3)
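The fog equations above can be sketched as follows (hypothetical helper class; the clamp of f to [0, 1] is the usual convention for depths outside the front/back range):

```java
// Sketch of the fog equations: linear coefficient f = (B - z)/(B - F),
// exponential coefficient f = e^(-d * z), and the per-channel blend
// C' = C * f + Cf * (1 - f) toward the fog color.
public class FogMath {
    // B = back (far) fog distance, F = front (near) fog distance, z = depth.
    public static double linearFogFactor(double z, double F, double B) {
        double f = (B - z) / (B - F);
        return Math.max(0.0, Math.min(1.0, f));   // clamp to [0, 1]
    }

    // d = fog density.
    public static double expFogFactor(double z, double d) {
        return Math.exp(-d * z);
    }

    // Applied separately to each color channel.
    public static double blend(double c, double cf, double f) {
        return c * f + cf * (1.0 - f);
    }
}
```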
The parameters used in the fog equations are as follows:
diffi = (Li • N) ⋅ Lci ⋅ Md     (E.5)
speci = (Si • N)^shin ⋅ Lci ⋅ Ms     (E.6)
atteni = 1 ⁄ (Kci + Kli ⋅ di + Kqi ⋅ di^2)     (E.7)
spoti = max((–Li • Di), 0)^expi     (E.8)
Note: If the vertex is outside the spot light cone, as defined by the cutoff angle,
spoti is set to 0. For directional and point lights, spoti is set to 1.
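The attenuation and spotlight terms can be sketched as follows (hypothetical helper class; the insideCone flag stands in for the cutoff-angle test against the spot light cone):

```java
// Sketch of the per-light terms: atten = 1/(Kc + Kl*d + Kq*d^2) and
// spot = max(-L . D, 0)^exp, with spot forced to 0 outside the cone.
public class LightTerms {
    // kc, kl, kq = constant, linear, quadratic attenuation; d = distance.
    public static double attenuation(double kc, double kl, double kq, double d) {
        return 1.0 / (kc + kl * d + kq * d * d);
    }

    // minusLdotD = the dot product -L_i . D_i; exp = concentration exponent.
    public static double spot(double minusLdotD, double exp, boolean insideCone) {
        if (!insideCone) return 0.0;   // vertex outside the cutoff angle
        return Math.pow(Math.max(minusLdotD, 0.0), exp);
    }
}
```

For directional and point lights the spot term is simply held at 1, matching the note above.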
This is a subset of OpenGL in that the Java 3D ambient and directional lights are
not attenuated and only ambient lights contribute to ambient lighting.
The parameters used in the lighting equation are as follows:
E = Eye vector
Ma = Material ambient color
Md = Material diffuse color
Me = Material emissive color
Ms = Material specular color
N = Vertex normal
shin = Material shininess
2. Implementations that do not have a separate ambient and diffuse color may
fall back to using an ambient intensity as a percentage of the diffuse color.
This ambient intensity should be calculated using the NTSC luminance
equation:
luminance = 0.30 ⋅ red + 0.59 ⋅ green + 0.11 ⋅ blue
Ec = Vc
Ef = Vt + P (E.10)
where
P = (De ⁄ 2) ⋅ (π ⁄ 2 – (γ – α))
[Figure E-1 shows the direct signal paths to the two ears, labeled with De, Dh, Va, Vc, Vh, Vt, α, and γ.]
2. The signals from the sound source reach both ears by indirect paths around
the head (sin α < De ⁄ 2Dh); see Figure E-2:
Ec = Vt + P′
Ef = Vt + P (E.11)
where
P = (De ⁄ 2) ⋅ (π ⁄ 2 – (γ – α))
P′ = (De ⁄ 2) ⋅ (π ⁄ 2 – (γ + α))
The time from the sound source to the closest ear is Ec ⁄ S , and the time from the
sound source to the farthest ear is Ef ⁄ S , where S is the current AuralAttribute
region’s speed of sound.
If the sound is closest to the left ear, then
ITDl = Ec ⁄ S
ITDr = Ef ⁄ S (E.12)
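Equation E.12 reduces to a simple division once the two path lengths are known. A minimal sketch, with names of our choosing (it assumes Ec and Ef have already been computed from the geometry above):

```java
// Sketch of the interaural time delay (ITD) of E.12: each delay is the
// path length to that ear divided by the speed of sound S in the current
// AuralAttribute region. Names are ours, not Java 3D API.
public class Itd {
    // Returns {leftDelay, rightDelay}; closestIsLeft selects which ear
    // receives the shorter path Ec (per the "closest ear" rule above).
    static double[] itd(double Ec, double Ef, double S,
                        boolean closestIsLeft) {
        double near = Ec / S, far = Ef / S;
        return closestIsLeft ? new double[]{near, far}
                             : new double[]{far, near};
    }
}
```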
[Figure E-2 shows the indirect signal paths around the head, labeled with De, Dh, Va, Vh, Vt, P′, α, and γ.]
reo sound image. Each equation below is calculated separately for the left and
right ear.
Gi = Gii ⋅ Gdi ⋅ Gai ⋅ Gri (E.15)
Note: For BackgroundSound sources Gdi = Gai = 1.0. For PointSound sources
Gai = 1.0.
F i = Fd i • Fa i (E.16)
Note: For BackgroundSound sources Fdi and Fai are identity functions. For
PointSound sources Fai is an identity function.
If the sound source is on the right side of the head, Ec is used for the left G
and F calculations and Ef is used for the right. Conversely, if the sound source
is on the left side of the head, Ef is used for the left calculations and Ec is
used for the right.
Attenuation
For sound sources with a single distanceGain array defined, the intersection
points of Vh (the vector from the sound source position through the listener’s
position) and the spheres (defined by the distanceGain array) are used to find the
index k where dk ≤ L ≤ dk+1. See Figure E-3.
For ConeSound sources with two distanceGain arrays defined, the intersection
points of Vh and the ellipses (defined by both the front and back distanceGain
arrays) closest to the listener’s position are used to determine the index k. See
Figure E-4.
The equation for the distance gain is
Gd = Gdk + ((Gdk+1 – Gdk) ⋅ (L – d1)) ⁄ (d2 – d1) (E.17)
[Figure E-3 shows Vh intersecting the distance-gain spheres around the listener, with A = (dk, Gdk), B = (dk+1, Gdk+1), C = (αk, Gak), and D = (αk+1, Gak+1).]
[Figure E-4 shows Vh intersecting the backDistanceAttenuation[] and frontDistanceAttenuation[] ellipses around the listener, with A = (d1, Gdk), B = (d2, Gdk+1), C = (αk, Gak), and D = (αk+1, Gak+1).]
Angular attenuation for both the spherical and elliptical cone sounds is identical.
The angular distances in the attenuation array closest to α are found and define
the index k into the angular attenuation array elements. The equation for the
angular gain is
Ga = Gak + ((Gak+1 – Gak) ⋅ (α – αk)) ⁄ (αk+1 – αk) (E.18)
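Both the distance gain and the angular gain are piecewise-linear interpolations between adjacent array entries, so a single helper covers E.17 (interpolating over distance) and E.18 (interpolating over angle). A sketch, with names of our choosing:

```java
// Sketch of the piecewise-linear interpolation behind E.17 and E.18:
// given sample points (xs[k], gains[k]), find the bracketing index k
// with xs[k] <= x <= xs[k+1] and interpolate linearly. For distance
// gain, x is the listener distance L; for angular gain, x is the
// angle alpha. Names are ours, not Java 3D API.
public class GainInterp {
    static double interpolate(double[] xs, double[] gains, double x) {
        if (x <= xs[0]) return gains[0];           // before first sample
        int last = xs.length - 1;
        if (x >= xs[last]) return gains[last];     // past last sample
        int k = 0;
        while (x > xs[k + 1]) k++;                 // locate index k
        double t = (x - xs[k]) / (xs[k + 1] - xs[k]);
        return gains[k] + (gains[k + 1] - gains[k]) * t;
    }
}
```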
Filtering
Similarly, the equations for calculating the AuralAttributes distance filter and the
ConeSound angular attenuation frequency cutoff filter are
Fd = Fdk + ((Fdk+1 – Fdk) ⋅ (L – d1)) ⁄ (d2 – d1) (E.19)
Fa = Fak + ((Fak+1 – Fak) ⋅ (α – αk)) ⁄ (αk+1 – αk) (E.20)
An N-pole lowpass filter may be used to perform the simple angular and distance
filtering defined in this version of Java 3D. These simple lowpass filters are
meant only as an approximation for full FIR filters (to be added in some future
version of Java 3D).
If there has been no change in the distance between the head and the sound, the
frequency is unchanged:
f′ = f (E.21)
If there has been a change in the distance between the head and the sound, the
Doppler effect equation is as follows:
f′ = f ⋅ Af ⋅ v (E.22)
When the head and sound are moving towards each other (the velocity ratio is
greater than 1.0), the velocity ratio equation is as follows:
v = ((S ⋅ Ar) + (∆v(h, t) ⋅ Av)) ⁄ ((S ⋅ Ar) – (∆v(s, t) ⋅ Av)) (E.23)
When the head and sound are moving away from each other (the velocity ratio is
less than 1.0), the velocity ratio equation is as follows:
v = ((S ⋅ Ar) – (∆v(h, t) ⋅ Av)) ⁄ ((S ⋅ Ar) + (∆v(s, t) ⋅ Av)) (E.24)
Note: If the adjusted velocity of the head or the adjusted velocity of the sound is
greater than the adjusted speed of sound, f′ is undefined.
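The Doppler equations E.22 through E.24 can be sketched as follows. This is an illustrative sketch, not Java 3D API code; dvH and dvS stand for the head and sound velocity terms ∆v(h, t) and ∆v(s, t), Af, Ar, and Av are the AuralAttribute frequency, rolloff, and velocity scale factors, and all names are ours.

```java
// Sketch of the Doppler equations E.22-E.24 (names are ours).
public class Doppler {
    // Velocity ratio: E.23 when head and sound approach each other,
    // E.24 when they move apart.
    static double velocityRatio(double S, double Ar, double Av,
                                double dvH, double dvS,
                                boolean approaching) {
        double sa = S * Ar;   // adjusted speed of sound
        return approaching
            ? (sa + dvH * Av) / (sa - dvS * Av)   // E.23
            : (sa - dvH * Av) / (sa + dvS * Av);  // E.24
    }

    // E.22: f' = f * Af * v
    static double shiftedFrequency(double f, double Af, double v) {
        return f * Af * v;
    }
}
```

With both velocities zero, the ratio is 1.0 and the frequency is unchanged, which is consistent with E.21.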
Ri = ∑j [ (Grj ⋅ Sample(t)i) • D(t + (Tr ⋅ j)) ] (E.26)
Note that the reverberation calculation outputs the same image to both left and
right output signals (thus there is a single monaural calculation for each sound
reverberated). Correct first-order (early) reflections, based on the location of the
sound source, the listener, and the active AuralAttribute’s bounds, are not
required for this version of Java 3D. Approximations based on the reverberation
delay time, either supplied by the application or calculated as the average delay
time within the selected AuralAttribute’s application region, will be used.
The feedback loop is repeated until AuralAttribute’s reverberation feedback loop
count is reached or Grj ≤ 0.000976 (effective zero amplitude, –60 dB, using the
measure of –6 dB drop for every doubling of distance).
D = Delay function
fLoop = Reverberation feedback loop count
Gr = Reverberation coefficient acting as a gain scale-factor
I = Stereo image of unreflected sound sources
R = Reverberation for each sound source
Sample = Sound digital sample with a specific sample rate, bit precision,
and an optional encoding and/or compression format
t = Time
Tr = Reverberation delay time (approximating first-order delay in the
AuralAttribute region)
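The termination rule described above (stop at the reverberation feedback loop count, or once the gain falls to effective zero at -60 dB) can be sketched as a simple loop. This sketch, with names of our choosing, only counts the passes; a real implementation would also apply the delay function D and accumulate the delayed, scaled samples as in E.26.

```java
// Sketch of the reverberation feedback termination rule: each pass
// through the loop reapplies the reverberation coefficient Gr, and the
// loop stops after fLoop passes or once the accumulated gain drops to
// 0.000976 (-60 dB) or below. Names are ours, not Java 3D API.
public class ReverbLoop {
    static int feedbackPasses(double Gr, int fLoop) {
        double gain = Gr;
        int passes = 0;
        while (passes < fLoop && gain > 0.000976) {
            passes++;
            gain *= Gr;   // gain after j passes is Gr^(j+1)
        }
        return passes;
    }
}
```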
I′ ( t ) l = I ( t ) l + [ D ( t ) • [ G ( P, α ) ⋅ I ( t ) r ] ] (E.27)
I′ ( t ) r = I ( t ) r + [ D ( t ) • [ G ( P, α ) ⋅ I ( t ) l ] ] (E.28)
The parameters used in the cross-talk equations, expanding on the terms used for
the equations for headphone playback, are as follows:
u = s ⋅ width
v = t ⋅ height (E.29)
i = trunc(u)
j = trunc(v) (E.30)
Ct = Ti,j (E.31)
If the texture boundary mode is REPEAT, then only the fractional bits of s and t
are used, ensuring that both s and t are less than 1.
If the texture boundary mode is CLAMP, then the s and t values are clamped to be
in the range [0, 1] before being mapped into u and v values. Further, if s ≥ 1, then
i is set to width – 1; if t ≥ 1, then j is set to height – 1.
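The point-sampled lookup of E.29 through E.31, together with the REPEAT and CLAMP boundary behavior just described, can be sketched as follows. The method and array names are ours, not part of the Java 3D API.

```java
// Sketch of point-sampled texel selection (E.29-E.31) with the REPEAT
// and CLAMP boundary modes. Returns the texel indices {i, j}.
public class PointSample {
    static int[] texelIndex(double s, double t, int width, int height,
                            boolean repeat) {
        if (repeat) {                       // keep only fractional bits
            s = s - Math.floor(s);
            t = t - Math.floor(t);
        } else {                            // CLAMP to [0, 1]
            s = Math.max(0.0, Math.min(1.0, s));
            t = Math.max(0.0, Math.min(1.0, t));
        }
        double u = s * width;               // E.29
        double v = t * height;
        int i = (int) u;                    // E.30: trunc
        int j = (int) v;
        if (i >= width)  i = width - 1;     // CLAMP: s >= 1 maps to width - 1
        if (j >= height) j = height - 1;    // CLAMP: t >= 1 maps to height - 1
        return new int[]{i, j};
    }
}
```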
The parameters in the point-sampled texture lookup equations are as follows:
The above equations are used when the selected texture filter function—either
the minification or the magnification filter function—is BASE_LEVEL_POINT.
Java 3D selects the appropriate texture filter function based on whether the tex-
ture image is minified or magnified when it is applied to the polygon. If the tex-
ture is applied to the polygon such that more than one texel maps onto a single
pixel, then the texture is said to be minified and the minification filter function is
selected. If the texture is applied to the polygon such that a single texel maps
onto more than one pixel, then the texture is said to be magnified and the magni-
fication filter function is selected. The selected function is one of the following:
BASE_LEVEL_POINT, BASE_LEVEL_LINEAR, MULTI_LEVEL_POINT, or MULTI_
LEVEL_LINEAR. In the case of magnification, the filter will always be one of the
two base level functions (BASE_LEVEL_POINT or BASE_LEVEL_LINEAR).
If the selected filter function is BASE_LEVEL_LINEAR, then a weighted average of
the four texels that are closest to the sample point in the base level texture image
is computed.
i0 = trunc(u – 0.5)
j0 = trunc(v – 0.5) (E.32)
i1 = i0 + 1
j1 = j0 + 1
α = frac(u – 0.5)
β = frac(v – 0.5) (E.33)
Ct = (1 – α) ⋅ (1 – β) ⋅ Ti0,j0 + α ⋅ (1 – β) ⋅ Ti1,j0 + (1 – α) ⋅ β ⋅ Ti0,j1 + α ⋅ β ⋅ Ti1,j1 (E.34)
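The BASE_LEVEL_LINEAR lookup of E.32 through E.34 can be sketched for a single-channel texture as follows. This is an illustrative sketch, not Java 3D API code; the texel array layout T[j][i], the clamping of indices at the texture edge, and the use of floor (which matches trunc for in-range coordinates) are our assumptions.

```java
// Sketch of bilinear filtering (E.32-E.34): a weighted average of the
// four texels nearest the sample point (u, v) in a single-channel
// texel array T[j][i]. Names are ours, not Java 3D API.
public class Bilinear {
    static double sample(double[][] T, double u, double v) {
        int height = T.length, width = T[0].length;
        int i0 = (int) Math.floor(u - 0.5);       // E.32
        int j0 = (int) Math.floor(v - 0.5);
        int i1 = i0 + 1, j1 = j0 + 1;
        double a = (u - 0.5) - i0;                // E.33: frac(u - 0.5)
        double b = (v - 0.5) - j0;
        // Clamp indices so edge samples stay in range (our convention)
        i0 = clamp(i0, width);  i1 = clamp(i1, width);
        j0 = clamp(j0, height); j1 = clamp(j1, height);
        return (1 - a) * (1 - b) * T[j0][i0] + a * (1 - b) * T[j0][i1]
             + (1 - a) * b * T[j1][i0] + a * b * T[j1][i1];  // E.34
    }
    static int clamp(int i, int n) { return Math.max(0, Math.min(n - 1, i)); }
}
```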
Mipmapping is the most common filtering technique for handling multiple levels
of detail. If the implementation uses mipmapping, the equations for computing a
texture color based on texture coordinates are simply those used by the underly-
ing rendering API (such as OpenGL or PEX). Other filtering techniques are pos-
sible as well.
C′ = Ct (E.35)
C′ = C ⋅ Ct (E.36)
Note that if the texture format is INTENSITY, alpha is computed identically to red,
green, and blue:
C′ α = C α ⋅ ( 1 – Ct α ) + Cb α ⋅ Ct α (E.39)
C = Color of the pixel being texture mapped (if lighting is enabled, then
this does not include the specular component)
Ct = Texture color
Cb = Blend color
Note that Crgb indicates the red, green, and blue channels of color C and that Cα
indicates the alpha channel of color C. This convention applies to the other color
variables as well.
If there is no alpha channel in the texture, a value of 1 is used for Ctα in BLEND
and DECAL modes.
When the texture mode is one of REPLACE, MODULATE, or BLEND, only certain of
the red, green, blue, and alpha channels of the pixel color are modified, depend-
ing on the texture format, as described below.
• INTENSITY: All four channels of the pixel color are modified. The inten-
sity value is used for each of Ctr, Ctg, Ctb, and Ctα in the texture applica-
tion equations, and the alpha channel is treated as an ordinary color
channel; the equation for C′rgb is also used for C′α.
• LUMINANCE: Only the red, green, and blue channels of the pixel color
are modified. The luminance value is used for each of Ctr, Ctg, and Ctb in
the texture application equations. The alpha channel of the pixel color is
unmodified.
• ALPHA: Only the alpha channel of the pixel color is modified. The red,
green, and blue channels are unmodified.
• LUMINANCE_ALPHA: All four channels of the pixel color are modified.
The luminance value is used for each of Ctr, Ctg, and Ctb in the texture ap-
plication equations, and the alpha value is used for Ctα.
• RGB: Only the red, green, and blue channels of the pixel color are modi-
fied. The alpha channel of the pixel color is unmodified.
• RGBA: All four channels of the pixel color are modified.
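The per-format channel selection above can be sketched as a table that expands each stored texel into the (Ctr, Ctg, Ctb, Ctα) values used by the texture application equations. This is an illustrative sketch with names of our choosing; for channels a format leaves unmodified, a real pipeline simply does not touch the pixel channel, whereas this sketch returns a neutral placeholder (1.0 for alpha, 0.0 for color).

```java
// Sketch of the texture-format channel expansion described in the list
// above. The enum and method names are ours, not Java 3D API.
public class TexFormat {
    enum Format { INTENSITY, LUMINANCE, ALPHA, LUMINANCE_ALPHA, RGB, RGBA }

    // Expands the stored components c[] into {Ctr, Ctg, Ctb, Cta}.
    // Placeholder values stand in for channels the format leaves alone.
    static double[] expand(Format f, double[] c) {
        switch (f) {
            case INTENSITY:       // intensity feeds all four channels
                return new double[]{c[0], c[0], c[0], c[0]};
            case LUMINANCE:       // luminance feeds r, g, b; alpha untouched
                return new double[]{c[0], c[0], c[0], 1.0};
            case ALPHA:           // only alpha; r, g, b untouched
                return new double[]{0.0, 0.0, 0.0, c[0]};
            case LUMINANCE_ALPHA: // luminance feeds r, g, b; alpha from c[1]
                return new double[]{c[0], c[0], c[0], c[1]};
            case RGB:             // r, g, b; alpha untouched
                return new double[]{c[0], c[1], c[2], 1.0};
            default:              // RGBA: all four channels
                return new double[]{c[0], c[1], c[2], c[3]};
        }
    }
}
```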
utils.applet Enables creating Java applets that can also run as stand-
alone applications.
utils.behaviors.keyboard Useful for controlling the scene graph behavior from the
keyboard.
F.5.1 Interfaces
Table F-4 lists the interfaces in the com.sun.j3d.loaders package.
Table F-4 loaders Package Interfaces
F.5.2 Classes
Table F-5 lists the classes in the com.sun.j3d.loaders package.
Table F-5 loaders Package Classes
F.5.3 Exceptions
Table F-6 lists the exceptions in the com.sun.j3d.loaders package.
F.11.1 Interfaces
Table F-12 lists the interface in the com.sun.j3d.utils.behaviors.mouse pack-
age.
Table F-12 utils.behaviors.mouse Package Interface
F.11.2 Classes
Table F-13 lists the classes in the com.sun.j3d.utils.behaviors.mouse pack-
age.
Table F-13 utils.behaviors.mouse Package Classes
F.16.1 Interfaces
Table F-18 lists the interface in the com.sun.j3d.utils.picking.behaviors
package.
Table F-18 utils.picking.behaviors Package Interfaces
F.16.2 Classes
Table F-19 lists the classes in the com.sun.j3d.utils.picking.behaviors pack-
age.
Table F-19 utils.picking.behaviors Package Classes
This appendix describes how to install the software on the CD-ROM that came
with this book. For descriptions of the program examples on the CD-ROM, see
Appendix H, “The Example Programs.”
The index.html file can be read with any browser. If any information in this
appendix contradicts the index.html file, assume that the index.html file is cor-
rect.
The CD-ROM also contains the following directories:
doc/ Contains the javadocs, and a browsable copy of this book
programs/ Contains example programs and demos
solaris/ Contains shell scripts for installing the Java 2 SDK, Java 3D,
OpenGL, and any necessary patches on Solaris systems
win32/ Contains executable files for installing the Java 2 SDK and
Java 3D on Windows systems
To install the Java 2 SDK and Java 3D on a Windows system, follow these steps:
1. If your system does not already have OpenGL installed, install it before
installing the CD-ROM (see Section G.2.1, “Requirements for Windows
Systems”).
2. Insert the CD-ROM in your CD-ROM drive.
If you do not get the java version “1.2.2” message, verify that the
jdk1.2.2/bin directory is in your path in Step 4 above. If you get the
above message, proceed to Section G.2.3, “Installing the Java 3D API on
Windows Systems.”
This script determines the version of Solaris that is running and installs all
necessary patches for that version.
4. Reboot your system.
To show the installed patches, type the following:
% showrev -p
If you want to install an individual patch, rather than the entire set of patches, pro-
ceed as follows:
1. Insert the CD-ROM in your CD-ROM drive.
Version 1.2, March 2000 579
G.3.4 Installing the Java 2 SDK on Solaris Systems CD-ROM INSTALLATION
2. su to root.
3. Run patchadd with the path to the patch you want to install.
For example:
% patchadd /cdrom/solaris/patches/5.7/105181-17
Note: You must have permission to write files in the directories where you want to
install Java and Java 3D. If you do not have this permission, the installation pro-
gram will run to completion but Java and Java 3D will not be installed.
% cd /home/myhome
If you do not get the java version “1.2.2” message, verify that you
have put the jdk1.2.2/bin directory in your path in Step 3 above. If you
do get the above message, proceed to Section G.3.5, “Installing the Java
3D API on Solaris Systems.”
To read the javadocs, point your browser to the index.html file in the newly
created html directory.
This appendix describes the example programs on the CD-ROM. For infor-
mation on how to install the example programs, see Appendix G, “CD-ROM
Installation.”
H.1 Introduction
Before you can compile Java 3D applications or run the example programs, you
need to have installed or you need to install the following software on your system:
• Java 2 SDK version 1.2 or later (included on the CD-ROM)
• Java 3D API (included on the CD-ROM)
• OpenGL
The demo/java3d directory contains 37 subdirectories. All but two of these direc-
tories (geometry and images) contain at least one example program. Some direc-
tories contain several example programs.
Each example program consists of two files, a .java file and a .class file. For
example, the ConicWorld directory contains the .java and .class files for six
example programs: BoxExample, ConicWorld, FlipCylinder, SimpleCylinder,
TexturedCone, and TexturedSphere.
% java HelloUniverse
Several of the example programs can be run as applets, either within a browser
(using Java Plug-in) or by running the applet from within the Java utility program
called appletviewer.
On Windows, the Java Plug-in is automatically installed along with the Java 2
runtime environment. Applets can be run in either Netscape Communicator or
Internet Explorer.
On Solaris, Java 3D applets can be run in Netscape Communicator 4.51 or later on
Solaris 2.6 or later. After installing Netscape Communicator, you need to install
Java Plug-in version 1.2 or later. Netscape Communicator and Java Plug-in may be
downloaded for free from:
http://www.sun.com/solaris/netscape/
In both cases (Windows and Solaris), Java 3D applets are run within the Java Plug-
in by opening the Java Plug-in version of the associated HTML file. These files are
of the form <demo-name>_plugin.html, where <demo-name> is the name of the
particular Java 3D applet. For example, to run the HelloUniverse example program
within a browser, open the HelloUniverse/HelloUniverse_plugin.html file in
your browser.
The following page contains links to all of the Java 3D demos that can be run as
applets:
<jdkhome>/demo/java3d/index.html
Just click on the link for a given demo to run that demo within your browser.
Some Java 3D applets require a larger heap size (memory pool) than the default
provided by Java Plug-in. To increase the heap size to 64 megabytes, run the Java
Plug-in Control Panel application (from the Start menu in the Programs section on
Windows) and set the “Java Run Time Parameters” to “-Xmx64m”.
% appletviewer HelloUniverse.html
Some Java 3D applets require a larger heap size (memory pool) than the default
provided by Java. To increase the maximum heap size to 64 megabytes, run
appletviewer with the “-J-Xmx64m” option. For example:
H.3.1 AWT_Interaction
Directory: demo/java3d/AWT_Interaction
The AWT_Interaction program displays a cube in a window. A “Rotate” button at
the top of the window rotates the cube in steps each time the button is selected.
This program demonstrates modifying the scene graph directly from the AWT
event thread using the ActionListener interface. This example is derived from the
HelloUniverse example, but instead of being continuously modified by a
RotationInterpolator behavior, the object’s TransformGroup is set to a new value
that is computed each time the “Rotate” button is pressed.
The relevant source code fragments from AWT_Interaction.java are shown
below.
The above code creates instance variables for the current angle, the
TransformGroup that will be modified, and a button that will trigger the
modification. The AWTInteraction class implements ActionListener so that it can
receive button press events. The createSceneGraph method (not shown here)
creates a root BranchGroup, an object TransformGroup, and a color cube, much
as in HelloUniverse.
public AWTInteraction() {
    ...
    Panel p = new Panel();
    p.add(rotateB);
    add("North", p);
    rotateB.addActionListener(this);
    ...
}
The constructor puts the Rotate button in a panel and adds the AWTInteraction
class as an action listener for the button.
H.3.2 AlternateAppearance
Directory: demo/java3d/AlternateAppearance
The AlternateAppearanceBoundsTest and AlternateAppearanceScopeTest pro-
grams demonstrate the ability of the AlternateAppearance node to override the
appearance of Shape3D nodes. The programs display a 5 × 5 matrix of spheres and
a control panel. The control panel allows you to select different scopes, and appear-
ance colors. The AlternateAppearanceBoundsTest program allows you to choose
one of three different sizes of BoundingSpheres for the region of influence of the
AlternateAppearance node, select whether a bounds object or a bounding leaf is
used, and enable or disable the appearance override enable flag in each of the
objects. The AlternateAppearanceScopeTest program allows you to set the hierar-
chical scoping of the AlternateAppearance node and enable or disable the appear-
ance override enable flag in each object in a group of objects.
H.3.3 Appearance
Directory: demo/java3d/Appearance
The AppearanceTest program displays an image background and nine rotating tet-
rahedron primitives. The tetrahedra are constructed with different material proper-
ties. The relevant source code fragments from AppearanceTest.java are shown
below.
switch (idx) {
    ...
    case 4:
        // Set up the texture map
        TextureLoader tex =
            new TextureLoader("../images/apimage.jpg", this);
        app.setTexture(tex.getTexture());
For Appearance number 4, the TextureLoader utility is used to load a JPEG image
from a file and create a Texture object. TextureAttributes are set up so that the lit
color is blended with the texture map (MODULATE). Finally, a lighting Material
object is created with a default color of white.
    case 5:
        // Set up the transparency properties
        TransparencyAttributes ta = new TransparencyAttributes();
        ta.setTransparencyMode(ta.BLENDED);
        ta.setTransparency(0.6f);
        app.setTransparencyAttributes(ta);
    ...
    return app;
}
H.3.4 AppearanceMixed
Directory: demo/java3d/AppearanceMixed
The AppearanceMixed program displays the same image background and nine
rotating tetrahedra (with different material properties) as the AppearanceTest pro-
gram described above. It adds a pair of triangles that are drawn in immediate mode;
this immediate-mode rendering is mixed in with the retained-mode objects (the tet-
rahedra).
An application subclasses Canvas3D and overrides the renderField method to
render geometry in mixed-immediate mode. The relevant source code fragments
from the MyCanvas3D class in AppearanceMixed.java are shown below.
The renderField method is called by Java 3D during the rendering loop for each
frame. The AppearanceMixed example overrides this Canvas3D method to
compute a new set of vertices and normals for the pair of triangles and to draw
the triangles to the canvas. The computeVert and computeNormals methods
update the vert and normals arrays and then call the methods to copy these
changes to the IndexedTriangleArray object.
The constructor for MyCanvas creates a Graphics3D object and initializes its
appearance and lights. Note that even though the scene graph also contains light
objects, they are not used for immediate mode rendering—lights must be created
and added to the graphics context in order for immediate mode geometry to be lit.
H.3.5 Background
Directory: demo/java3d/Background
The BackgroundGeometry program demonstrates the use of background geometry.
The inside of a Sphere is texture mapped and used as background geometry, which
is rendered by Java 3D as if it were infinitely far away. The background is position-
and scale-invariant—only rotations affect how the geometry is rendered. This
demo demonstrates this with a group of boxes drawn in the virtual world. The
viewing platform can be adjusted with the mouse buttons. Notice how translations
do not affect the background, but rotations do.
H.3.6 Billboard
Directory: demo/java3d/Billboard
The Bboard and BboardPt programs demonstrate the use of Java 3D’s Billboard
behavior to rotate a billboard around the Y axis and around a fixed point, respec-
tively. Use the left mouse button to rotate the scene, the middle button to zoom, and
the right button to translate.
H.3.7 ConicWorld
Directory: demo/java3d/ConicWorld
This directory contains a README file and six demonstration programs:
• The ConicWorld program shows spheres, cylinders, and cones of different
resolutions and colors.
• The SimpleCylinder program demonstrates a simple cylinder object. The
left mouse button rotates the cylinder.
• The BoxExample program demonstrates a rotating texture-mapped box.
The left mouse button rotates the box, the middle button zooms.
• The FlipCylinder program puts up a textured cylinder that can be rotated
and zoomed with the mouse. The left mouse button rotates, the middle but-
ton zooms, and the right button translates.
• The TexturedCone and TexturedSphere programs demonstrate the use of
texture mapping with the Cone and Sphere primitives, respectively.
These programs demonstrate the use of the geometry primitives in the
com.sun.j3d.utils.geometry package.
H.3.8 FourByFour
Directory: demo/java3d/FourByFour
The FourByFour program only runs as an applet, either from a browser (using
FourByFour_plugin.html) or from appletviewer (using FourByFour.html).
FourByFour is a three-dimensional game of tic-tac-toe on a 4 × 4 × 4 cube. Once
loaded, press the “Instructions” button for information on how to play the game.
H.3.9 GearTest
Directory: demo/java3d/GearTest
The GearTest program shows a single rotating gear. The GearBox program shows
a rotating gear assembly with five gears and gear shafts. The entire gear assembly
can be manipulated with the mouse. The geometry in this example program is
mathematically computed, and demonstrates the use of different Java 3D geometry
types. The Gear, SpurGear, and Shaft classes contain most of the geometry creation
methods.
H.3.10 GeometryByReference
Directory: demo/java3d/GeometryByReference
The GeometryByReferenceTest program draws a pair of triangles using the new
geometry-by-reference API in the GeometryArray object. The geometry or color
data is modified when the corresponding item is selected from the “Update Data”
combo box.
The ImageComponentByReferenceTest program draws a small raster object in the
upper left corner and a larger texture mapped square, using the same image as a tex-
ture, in the middle of the window. This program demonstrates the new by-reference
API for passing image data to Java 3D. It also demonstrates the new Y-up versus
Y-down attribute for images. Use the combo boxes at the bottom of the screen to
select the desired mode for the raster image and the texture image.
The InterleavedTest program draws a pair of triangles using the new interleaved
geometry-by-reference API in the GeometryArray object.
H.3.11 GeometryCompression
Directory: demo/java3d/GeometryCompression
The cgview program loads an object from a compressed geometry resource (.cg)
file and displays it on the screen. The object can be manipulated with the mouse.
Run the program with the following command:
java cgview <.cg file> [object-number]
You can use one of the .cg resource files in the demo/java3d/geometry directory.
The following example will display a galleon (ship):
The obj2cg program reads one or more Wavefront .obj files, compresses them,
and appends the corresponding compressed objects to the specified compressed
geometry resource (.cg) file. Run the program with the following command:
java obj2cg <.obj file> [<.obj file> ...] <.cg file>
If the .cg file does not exist, it is created. If the file does exist and is a valid .cg
resource file, the new object(s) are appended to the objects in the existing file. If it
is not a valid .cg file, an exception is thrown.
The ObjectFileCompressor class provides the methods, used by obj2cg, to
compress Wavefront .obj files into Java 3D CompressedGeometry nodes. The
ObjectFileCompressor.html file is the javadoc that describes the methods.
H.3.12 HelloUniverse
Directory: demo/java3d/HelloUniverse
The HelloUniverse program creates a cube and a RotationInterpolator behavior
object that rotates the cube at a constant rate of π/2 radians per second. The code
for this program is described in Section 1.6.3, “HelloUniverse: A Sample Java 3D
Program.”
H.3.13 LOD
Directory: demo/java3d/LOD
The LOD program demonstrates the use of the DistanceLOD behavior to automat-
ically select from among four different resolutions of a shaded, lit sphere. The mid-
dle mouse button moves the object closer and farther away from the viewer. As the
object gets farther away from the viewer, successively lower-resolution versions of
the sphere are displayed.
H.3.14 Lightwave
Directory: demo/java3d/Lightwave
The Viewer program is a loader and viewer for Lightwave 3D scene files. This pro-
gram implements only a subset of the features in Lightwave 3D. The README.txt
file contains release notes for the loader. The Viewer program takes the name of a
Lightwave 3D scene (.lws) file as its only argument. For example:
H.3.15 ModelClip
Directory: demo/java3d/ModelClip
The ModelClipTest and ModelClipTest2 programs show model clipping. The
ModelClipTest program draws an object that is clipped by a model clipping plane.
The mouse can be used to manipulate the object. Note that the clipping plane
moves with the object. The ModelClipTest2 program has a fixed object with a mov-
able model clipping plane. The mouse can be used to manipulate the model clip-
ping plane.
H.3.16 Morphing
Directory: demo/java3d/Morphing
The Morphing program displays a hand at the bottom of the window that morphs
between the static views of the three hands at the top of the window. The
Pyramid2Cube program is a simpler example that morphs between three cubes.
H.3.17 ObjLoad
Directory: demo/java3d/ObjLoad
The ObjLoad program loads Wavefront object files. Run the program with the fol-
lowing command:
java ObjLoad [-s] [-n] [-t] [-c degrees] <.obj file>
where the options are as follows:
-s Spin (no user interaction)
-n No triangulation
-t No stripification
-c Set crease angle for normal generation (default is 60 without
smoothing group info, otherwise 180 within smoothing groups)
You can use one of the .obj files in the demo/java3d/geometry directory. The fol-
lowing example will display a galleon (ship):
objTrans.addChild(s.getSceneGroup());
The above code fragment creates an ObjectFile loader with the desired flags and
crease angle, loads the specified filename (checking for file and parsing excep-
tions), and adds the loaded objects to the scene graph. This code fragment could
easily be modified to accommodate a variety of loaders.
H.3.18 OffScreenCanvas3D
Directory: demo/java3d/OffScreenCanvas3D
The OffScreenTest program creates a scene graph containing a cube and renders
that cube to an on-screen Canvas3D. In the postSwap routine of the on-screen
Canvas3D, an off-screen rendering of the same scene is done to the off-screen
buffer. The resulting image is then used as the source image for a Raster object in
the scene graph.
The PrintFromButton program is similar to the OffScreenTest program, except that
it doesn’t automatically render to the off-screen buffer during the postSwap call-
back of its on-screen Canvas3D. The off-screen rendering is done when the “Print”
button is pressed.
H.3.19 OrientedShape3D
Directory: demo/java3d/OrientedShape3D
The OrientedTest and OrientedPtTest programs demonstrate the use of Java 3D’s
OrientedShape3D nodes to create geometry that is oriented about the Y axis and
around a fixed point, respectively. These are essentially the same example pro-
grams as used in the Billboard example, except that they use an OrientedShape3D
node rather than a Billboard behavior. Use the left mouse button to rotate the scene,
the middle button to zoom, and the right button to translate. Notice how the text
does not jump around like it does when using a Billboard behavior.
H.3.20 PackageInfo
Directory: demo/java3d/PackageInfo
The PackageInfo program lists the package information for the Java 3D packages
installed on the system. This can be used to determine which version of Java 3D
you are running.
The QueryProperties program lists the values of the properties returned by the
queryProperties method of the Canvas3D that is created from the preferred
GraphicsConfiguration returned by SimpleUniverse.
H.3.21 PickTest
Directory: demo/java3d/PickTest
The PickTest program displays several 3D objects and a control panel. The control
panel allows the user to change the pick mode, pick tolerance, and the view mode.
You can pick and rotate objects with the mouse. The PickTest program demon-
strates the use of the PickMouseBehavior utility classes.
The IntersectTest program demonstrates the ability to get geometric intersection
information from a picked object. Use the mouse to pick a point on one of the
objects in the window. The program illuminates the picked location and the vertices
of the primitive with tiny spheres. Information about the picked primitive and the
point of intersection is printed. The IntersectTest program uses a mouse-based
behavior, IntersectInfoBehavior, to control the picking. The relevant source code
fragments from IntersectInfoBehavior.java are shown below.
PickCanvas pickCanvas;
PickResult[] pickResult;

if (eventId == MouseEvent.MOUSE_PRESSED) {
    int x = ((MouseEvent)event[i]).getX();
    int y = ((MouseEvent)event[i]).getY();
    pickCanvas.setShapeLocation(x, y);
    Point3d eyePos = pickCanvas.getStartPosition();
    pickResult = pickCanvas.pickAllSorted();
    if (pickResult != null) {
        // Get node info
        Node curNode = pickResult[0].getObject();
        Geometry curGeom = ((Shape3D)curNode).getGeometry();
        GeometryArray curGeomArray = (GeometryArray) curGeom;
        // Get closest intersection results
        PickIntersection pi =
            pickResult[0].getClosestIntersection(eyePos);
The processStimulus method checks for a mouse event and initiates a pick, via the
PickCanvas object, at the selected location. It then looks for pick results and
extracts the intersection information from the pick result (if any).
H.3.22 PickText3D
Directory: demo/java3d/PickText3D
H.3.23 PlatformGeometry
Directory: demo/java3d/PlatformGeometry
The SimpleGeometry program displays two geometry objects: a sphere and a rotat-
ing cube. The sphere is created as PlatformGeometry using the universe utilities.
This means that it is always in a fixed location relative to the viewer.
H.3.24 PureImmediate
Directory: demo/java3d/PureImmediate
The PureImmediate program demonstrates Java 3D’s pure immediate mode. In this
mode, objects are not placed into a scene graph but are instead drawn using the
GraphicsContext3D drawing methods. The Java 3D renderer for the Canvas3D into
which the immediate-mode graphics are rendered must be stopped prior to
immediate-mode rendering. In this mode, the rendering is done from a
user-controlled thread.
The relevant source code fragments from PureImmediate.java are shown below.
The class declares instance variables for a Canvas3D, the GraphicsContext3D
associated with the canvas, a geometry object for drawing, a Transform3D object
for rotation, and an Alpha object to make the rotation time-based. The
PureImmediate class implements Runnable so that it can be run by a user-created
drawing thread.
    // Set up geometry
    cube = new ColorCube(0.4).getGeometry();
}
The render method renders a single frame. After ensuring that the graphics context
is set up and the geometry is created, it computes the new rotation matrix, clears
the canvas, draws the cube, and swaps the draw and display buffers. The run
method is the entry point for the drawing thread. It calls the render method in a
loop, yielding control to other threads once per frame.
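The drawing-thread pattern just described can be sketched in plain Java. This is an illustration of the threading structure only: the class name RenderLoop and the frame counter are inventions for this sketch, and the real demo’s render method draws via GraphicsContext3D rather than counting frames.

```java
// Sketch of the PureImmediate drawing-thread pattern: a Runnable that
// renders frames in a loop, yielding to other threads once per frame.
public class RenderLoop implements Runnable {
    private volatile boolean running = true;
    // Stand-in for actual rendering work; the real demo draws the cube here.
    private volatile int framesRendered = 0;

    private void render() {
        framesRendered++;
    }

    public void run() {
        while (running) {
            render();
            Thread.yield(); // give other threads a chance each frame
        }
    }

    public static void main(String[] args) throws InterruptedException {
        RenderLoop loop = new RenderLoop();
        Thread drawingThread = new Thread(loop);
        drawingThread.start();
        // Wait until at least one frame has been rendered, then stop.
        while (loop.framesRendered == 0) {
            Thread.sleep(1);
        }
        loop.running = false;
        drawingThread.join();
        System.out.println(loop.framesRendered > 0);
    }
}
```

The key point of the pattern is that the application, not Java 3D, owns the thread: the renderer is stopped on the canvas, so frames appear only when this loop draws them.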
public PureImmediate() {
    // Set the applet layout and create the Canvas3D
    setLayout(new BorderLayout());
    canvas = new Canvas3D(SimpleUniverse.getPreferredConfiguration());
    canvas.stopRenderer();
    add("Center", canvas);
The constructor creates the 3D canvas, stops the Java 3D renderer, sets up the view-
ing branch (necessary even in pure immediate mode) and starts up the drawing
thread.
H.3.25 ReadRaster
Directory: demo/java3d/ReadRaster
The ReadRaster program creates a scene graph containing a cube and renders that
cube. In the postSwap routine of the Canvas3D, the contents of the canvas are read
back using the Immediate mode readRaster method. The resulting image is then
used as the source image for a Raster object in the scene graph.
H.3.26 Sound
Directory: demo/java3d/Sound
SimpleSounds shows a rotating cube and plays three different sounds, including a
voice saying “Hello Universe.”
ReverberateSound demonstrates different amounts of reverberation. It plays a
voice saying “Hello Universe” in several different environments, including a
closet, acoustic lab, garage, dungeon (both medium and large), and a cavern.
H.3.27 SphereMotion
Directory: demo/java3d/SphereMotion
The SphereMotion program shows a lit sphere that is continuously moving closer
and farther from the viewer. The sphere is lit by two light sources, one fixed and
one rotating around the sphere. Depending on a command-line option, the two light
sources are created as directional lights (the default), point lights, or spot lights.
Run the program with the following command:
java SphereMotion [-dir | -point | -spot]
H.3.28 SplineAnim
Directory: demo/java3d/SplineAnim
H.3.29 Text2D
Directory: demo/java3d/Text2D
The Text2DTest program shows three different types of 2D text using the Text2D
utility class.
H.3.30 Text3D
Directory: demo/java3d/Text3D
The Text3DLoad program permits you to enter your own text and see how it dis-
plays in 3D. The command for running Text3DLoad is as follows:
java Text3DLoad [-f fontname] [-t tesselation] <text>
The fontname variable allows you to specify one of the Java Font names, such as
Helvetica, TimesRoman, and Courier. The tesselation variable specifies how
finely to tessellate the font glyphs. Once the text displays, the left mouse button
rotates the text, the middle button zooms, and the right button translates.
H.3.31 TextureByReference
Directory: demo/java3d/TextureByReference
The TextureByReference program shows a set of animating textures using the
image component by-reference feature. A control panel allows you to flip the
image or to set the texture and geometry by-reference flag, the image orientation
flag (y-up or y-down), and the image type. It also allows you to control the texture
animation speed and to stop and restart the animation.
H.3.32 TextureTest
Directory: demo/java3d/TextureTest
The TextureImage program displays a rotating cube with a user-specified image
file mapped onto the surface. The command for running TextureImage is as fol-
lows:
java TextureImage <image-filename> [-f ImageComponent format]
The ImageComponent format variable allows you to specify the format of the pixel
data. If you do not specify an image file, the rotating cube will appear white. You
can use one of the image files in the demo/java3d/images directory; for example,
an image of the Earth will be displayed mapped onto the rotating cube.
H.3.33 TickTockCollision
Directory: demo/java3d/TickTockCollision
The TickTockCollision program shows an oscillating colored cube that collides
with two rectangular objects. The rectangular objects change color when they col-
lide with the cube.
H.3.34 TickTockPicking
Directory: demo/java3d/TickTockPicking
The TickTockPicking program displays a set of nine spinning tetrahedra and an
oscillating colored cube. Picking the cube or one of the tetrahedra with the left
mouse button causes it to change color.
H.3.35 VirtualInputDevice
Directory: demo/java3d/VirtualInputDevice
This directory contains another version of the HelloUniverse program with view-
ing position controls implemented via the InputDevice interface.
Glossary

avatar
The software representation of a person as the person appears to others in a
shared virtual universe. The avatar may or may not resemble an actual person.
branch graph
A graph rooted to a BranchGroup node. See also scene graph and shared graph.
CC
Clipping coordinates.
center ear
Midpoint between left and right ears of listener.
center eye
Midpoint between left and right eyes of viewer. This is the head coordinate sys-
tem origin.
compiled
A subgraph may be compiled by an application using the compile method of the
root node—a BranchGroup or a SharedGroup—of the graph. A compiled object
is any object that is part of a compiled graph. An application can compile some
or all of the subgraphs that make up a complete scene graph. Java 3D compiles
these graphs into an internal format. Additionally, Java 3D provides restricted
access to methods of compiled objects or graphs. See also live.
compiled-retained mode
One of three modes in which Java 3D objects are rendered. In this mode, Java 3D
renders the scene graph, or a portion of the scene graph, that has been previously
compiled into an internal format. See also retained mode, immediate mode.
content branch
A branch graph that contains content-related leaf nodes, such as Shape3D nodes.
No viewing-specific nodes are contained in a content branch.
DAG
Directed acyclic graph. A scene graph.
EC
Eye coordinates.
frustum
See view frustum.
group node
A node within a scene graph that composes, transforms, selects, and in general
modifies its descendant nodes. See also leaf node, root node.
HMD
Head-mounted display.
image plate
The display area; the viewing screen or head-mounted display.
immediate mode
One of three modes in which Java 3D objects are rendered. In this mode objects
are rendered directly, under user control, rather than as part of a scene graph tra-
versal. See also retained mode, compiled-retained mode.
IID
Interaural intensity difference. The difference between the perceived amplitude
(gain) of the signal from a source as it reaches the listener’s left and right ears.
ITD
Interaural time difference. The difference in time in the arrival of the signal from
a sound source as it reaches the listener’s left and right ears.
leaf node
A node within a scene graph that contains the visual, auditory, and behavioral
components of the scene. See also group node, root node.
LOD
Level of detail. A predefined Behavior that operates on a Switch node to select
from among multiple versions of an object or collection of objects.
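The selection idea can be sketched in plain Java. This is an illustration of the concept only, not the Java 3D DistanceLOD API; the class and method names here are hypothetical.

```java
// Sketch of distance-based level-of-detail selection: given the viewer
// distance and increasing distance thresholds (one fewer threshold than
// there are versions), choose which version of the object to display.
public class LodSelect {
    static int select(double distance, double[] thresholds) {
        for (int i = 0; i < thresholds.length; i++) {
            if (distance < thresholds[i]) {
                return i; // closer than this threshold: more detailed version
            }
        }
        return thresholds.length; // beyond all thresholds: least detail
    }

    public static void main(String[] args) {
        double[] thresholds = {10.0, 50.0}; // three versions of the object
        System.out.println(select(5.0, thresholds));   // high detail
        System.out.println(select(30.0, thresholds));  // medium detail
        System.out.println(select(100.0, thresholds)); // low detail
    }
}
```

In Java 3D itself the chosen index would drive a Switch node, so that only one version of the object is rendered at a time.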
polytope
A bounding volume defined by a closed intersection of half-spaces.
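As an illustration of this definition, here is a plain-Java sketch of the containment test it implies (hypothetical names, not Java 3D’s own BoundingPolytope class): a point lies inside the polytope exactly when it satisfies every half-space inequality n · p ≤ d.

```java
// Sketch of point-in-polytope containment: a polytope is the intersection
// of half-spaces, each given by a normal n and a constant d, containing
// the points p with n . p <= d.
public class PolytopeExample {
    static boolean inside(double[][] normals, double[] d, double[] p) {
        for (int i = 0; i < normals.length; i++) {
            double dot = normals[i][0] * p[0]
                       + normals[i][1] * p[1]
                       + normals[i][2] * p[2];
            if (dot > d[i]) {
                return false; // outside this half-space, so outside the polytope
            }
        }
        return true; // inside every half-space
    }

    public static void main(String[] args) {
        // Axis-aligned unit cube centered at the origin, as six half-spaces.
        double[][] n = {
            {1, 0, 0}, {-1, 0, 0},
            {0, 1, 0}, {0, -1, 0},
            {0, 0, 1}, {0, 0, -1}
        };
        double[] d = {0.5, 0.5, 0.5, 0.5, 0.5, 0.5};
        System.out.println(inside(n, d, new double[]{0, 0, 0})); // center
        System.out.println(inside(n, d, new double[]{1, 0, 0})); // outside
    }
}
```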
retained mode
One of three modes in which Java 3D objects are rendered. In this mode, Java 3D
traverses the scene graph and renders the objects that are in the graph. See also
compiled-retained mode, immediate mode.
root node
A node within a scene graph that establishes the default environment. See also
group node, leaf node.
scene graph
A collection of branch graphs rooted to a Locale. A virtual universe has one or
more scene graphs. See also branch graph and shared graph.
shared graph
A graph rooted to a SharedGroup node. See also branch graph and scene graph.
stride
The number of data values from the beginning of one vertex to the beginning of
the next in an interleaved array; that is, the total length of one vertex’s data.
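To make this concrete, a small plain-Java sketch (a hypothetical example, not the Java 3D API): with two texture-coordinate, three normal, and three position floats per vertex, the stride is eight floats, and the data for vertex i begins at offset i * 8.

```java
// Sketch of addressing into an interleaved vertex array using the stride.
public class StrideExample {
    public static void main(String[] args) {
        // Per-vertex layout: 2 texcoord + 3 normal + 3 coordinate floats.
        final int TEX = 2, NORMAL = 3, COORD = 3;
        final int stride = TEX + NORMAL + COORD; // 8 floats per vertex

        // Two vertices packed into one interleaved array.
        float[] interleaved = {
            0f, 0f,   0f, 0f, 1f,   -1f, -1f, 0f,  // vertex 0
            1f, 0f,   0f, 0f, 1f,    1f, -1f, 0f   // vertex 1
        };

        // The x coordinate of vertex i lives at i * stride + TEX + NORMAL.
        float x1 = interleaved[1 * stride + TEX + NORMAL];
        System.out.println(stride + " " + x1);
    }
}
```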
three space
Three-dimensional space.
view branch
A branch graph that contains a ViewPlatform leaf node and may contain other
content-related leaf nodes for geometry associated with a viewer.
view frustum
A truncated, pyramid-shaped viewing area that defines how much of the world
the viewer sees. Objects not within the view frustum are not visible. Objects that
intersect the boundaries of the viewing frustum are clipped (partially drawn).
VPC
View platform coordinates.
Version 1.2, March 2000