
Module 2: Input and Interaction

Syllabus: Interaction, Input Devices, Clients and Servers, Display Lists, Display Lists and Modeling, Programming Event-Driven Input, Menus.

2.1 INTERACTION

One of the most important advances in computer technology was enabling users to interact with computer
displays. More than any other event, Ivan Sutherland's Sketchpad project launched the present era of interactive
computer graphics. The basic paradigm that he introduced is deceptively simple. The user sees an image on the
display. She reacts to this image by means of an interactive device, such as a mouse. The image changes in
response to her input. She reacts to this change, and so on. Whether we are writing programs using the tools
available in a modern window system or using the human-computer interface in an interactive museum exhibit,
we are making use of this paradigm.

Although rendering is the prime concern of most modern APIs, including OpenGL, interactivity is an important
component of most applications. OpenGL, however, does not support interaction directly. The major reason for
this omission is that the system architects who designed OpenGL wanted to increase its portability by allowing
the system to work in a variety of environments. Consequently, window and input functions were left out of the
API. Although this decision makes renderers portable, it makes discussions of interaction that do not include
specifics of the window system more difficult. In addition, because any application program must have at least a
minimal interface to the window environment, we cannot avoid such issues completely if we want to write
complete, nontrivial programs.

We can avoid such potential difficulties by using the GLUT toolkit. This toolkit provides the minimal
functionality that is expected on virtually all systems, such as opening of windows, use of the keyboard and
mouse, and creation of pop-up menus. We adopt this approach, even though it may not provide all the features
of any particular windowing system and produces code that neither makes use of the full capabilities of any
particular window system nor proves as efficient as code written for a particular environment. However, writing
code for the standard window systems is based on the principles that we can illustrate most simply using GLUT.

We use the term window system to include the total environment provided by systems such as the X Window
System, Microsoft Windows, and the Macintosh Operating System. Graphics programs that we develop will
render into a window within one of these environments. The terminology used in the window system literature
may obscure the distinction between, for example, an X window and the OpenGL window into which our graphics
are rendered. However, you will usually be safe if you regard the OpenGL window as a particular type of window
on your system that can display output from OpenGL programs. Our use of the GLUT toolkit will enable us to avoid the complexities inherent in the interactions among the window system, the window manager, and the
graphics system.

2.2 INPUT DEVICES

2.2.1 Physical Input Devices

The pointing device allows the user to indicate a position on a display and almost always incorporates one or
more buttons to allow the user to send signals or interrupts to the computer. The keyboard device is almost always
a physical keyboard but can be generalized to include any device that returns character codes. For example, a
tablet PC uses recognition software to decode the user's handwriting and produces character codes identical to those of the standard keyboard.

We will use the American Standard Code for Information Interchange (ASCII) in our examples. ASCII assigns a
single unsigned byte to each character. Nothing we do restricts us to this particular choice, other than that ASCII
is the prevailing code used. Note, however, that other codes, especially those used for Internet applications, use
multiple bytes for each character, thus allowing for a much richer set of supported characters.

The mouse (Figure 3.1) and trackball (Figure 3.2) are similar in use and often in construction as well. When
turned over, a typical mechanical mouse looks like a trackball. In both devices, the motion of the ball is converted
to signals sent back to the computer by pairs of encoders inside the device that are turned by the motion of the
ball. The encoders measure motion in two orthogonal directions.

There are many variants of these devices. Some use optical detectors rather than mechanical detectors to measure
motion. Small trackballs are popular with portable computers because they can be incorporated directly into the
keyboard. There are also various pressure-sensitive devices used in keyboards that perform similar functions to
the mouse and trackball but that do not move; their encoders measure the pressure exerted on a small knob that
often is located between two keys in the middle of the keyboard.


A typical data tablet (Figure 3.4) has rows and columns of wires embedded under its surface. The position of the
stylus is determined through electromagnetic interactions between signals traveling through the wires and sensors
in the stylus. Touch-sensitive transparent screens that can be placed over the face of a CRT have many of the
same properties as the data tablet. Small, rectangular, pressure-sensitive touchpads are embedded in the keyboards
of most portable computers. These touchpads can be configured as either relative- or absolute positioning devices.
Some are capable of detecting simultaneous input from two fingers touching different spots on the pad and can
use this information to enable more complex behaviors.

The lightpen has a long history in computer graphics. It was the device used in Sutherland's original Sketchpad.
The lightpen contains a light-sensing device, such as a photocell (Figure 3.5). If the lightpen is positioned on the
face of the CRT at a location opposite where the electron beam strikes the phosphor, the light emitted exceeds a
threshold in the photodetector and a signal is sent to the computer. The light pen was originally used on random
scan devices so the time of the interrupt could easily be matched to a piece of code in the display list, thus making
the light pen ideal for selecting application-defined objects. With raster scan devices, the position on the display
can be determined by the time the scan begins and the time it takes to scan each line. Hence, we have a direct-
positioning device. The lightpen is not as popular as the mouse, data tablet, and trackball. One of its major
deficiencies is that it has difficulty obtaining a position that corresponds to a dark area of the screen. However,
tablet PCs are used in a manner that mimics how the light pen was used originally; the user has a stylus with
which she can move randomly about the tablet (display) surface.


One other device, the joystick (Figure 3.6), is particularly worthy of mention. The motion of the stick in two
orthogonal directions is encoded, interpreted as two velocities, and integrated to identify a screen location. The
integration implies that if the stick is left in its resting position, there is no change in the cursor position and that
the farther the stick is moved from its resting position, the faster the screen location changes. Thus, the joystick
is a variable-sensitivity device. The other advantage of the joystick is that the device can be constructed with
mechanical elements, such as springs and dampers, that give resistance to a user who is pushing the stick. Such
a mechanical feel, which is not possible with the other devices, makes the joystick well suited for applications
such as flight simulators and game controllers.

For three-dimensional graphics, we might prefer to use three-dimensional input devices. Although various such
devices are available, none have yet won the widespread acceptance of the popular two-dimensional input
devices. A spaceball looks like a joystick with a ball on the end of the stick (Figure 3.7); however, the stick does
not move. Rather, pressure sensors in the ball measure the forces applied by the user. The spaceball can measure
not only the three direct forces (up-down, front-back, left-right) but also three independent twists. The device
measures six independent values and thus has six degrees of freedom. Such an input device could be used, for
example, both to position and to orient a camera. Other three-dimensional devices, such as laser scanners, measure
three dimensional positions directly. Numerous tracking systems used in virtual reality applications sense the
position of the user. Virtual reality and robotics applications often need more degrees of freedom than the two to
six provided by the devices that we have described. Devices such as data gloves can sense motion of various parts
of the human body, thus providing many additional input signals. Recently, in addition to being wireless, input
devices such as the Nintendo Wii remote provide sensing of position and orientation.


2.2.2 Logical Devices

Some earlier APIs defined six classes of logical input devices. Because input in a modern window system cannot
always be disassociated completely from the properties of the physical devices, OpenGL does not take this
approach. Nevertheless, we describe the six classes briefly because they illustrate the variety of input forms
available to a developer of graphical applications. We will see how OpenGL can provide the functionality of each
of these classes.

1. String: A string device is a logical device that provides ASCII strings to the user program. This logical
device is usually implemented by means of a physical keyboard. In this case, the terminology is consistent
with that used in most window systems and OpenGL, which usually do not distinguish between the logical
string device and a physical keyboard.
2. Locator: A locator device provides a position in world coordinates to the user program. It is usually
implemented by means of a pointing device, such as a mouse or a trackball. In OpenGL, we usually use
the pointing device in this manner, although we have to do the conversion from screen coordinates to
world coordinates within our own programs.
3. Pick: A pick device returns the identifier of an object on the display to the user program. It is usually
implemented with the same physical device as a locator, but has a separate software interface to the user
program. In OpenGL, we can use a process called selection (Section 3.8) to accomplish picking.
4. Choice: Choice devices allow the user to select one of a discrete number of options. In OpenGL, we can
use various widgets provided by the window system. A widget is a graphical interactive device, provided
by either the window system or a toolkit. Typical widgets include menus, scrollbars, and graphical
buttons. Most widgets are implemented as special types of windows. For example, a menu with n
selections acts as a choice device, allowing us to select one of n alternatives. Widget sets are the key
element defining a graphical user interface, or GUI.
5. Valuators: Valuators provide analog input to the user program. On some graphics systems, there are
boxes or dials to provide valuator input. Here again, widgets within various toolkits usually provide this
facility through graphical devices such as slide bars and radio boxes.
6. Stroke: A stroke device returns an array of locations. Although we can think of a stroke device as similar
to multiple uses of a locator, it is often implemented such that an action, say, pushing down a mouse
button, starts the transfer of data into the specified array, and a second action, such as releasing the button,
ends this transfer.


2.2.3 Input Modes

The manner by which input devices provide input to an application program can be described in terms of two
entities: a measure process and a device trigger. The measure of a device is what the device returns to the user
program. The trigger of a device is a physical input on the device with which the user can signal the computer.
For example, the measure of a keyboard should include a single character or a string of characters, and the trigger
can be the Return or Enter key.

The application program can obtain the measure of a device in three distinct modes. Each mode is defined by the
relationship between the measure process and the trigger. Once a measure process is started, the measure is taken
and placed in a buffer, even though the contents of the buffer may not yet be available to the program. For
example, the position of a mouse is tracked continuously by the underlying window system and a cursor is
displayed regardless of whether the application program needs mouse input.

In request mode, the measure of the device is not returned to the program until the device is triggered. This input
mode is standard in nongraphical applications. For example, if a typical C program requires character input, we
use a function such as scanf. When the program needs the input, it halts when it encounters the scanf statement
and waits while we type characters at our terminal. We can backspace to correct our typing, and we can take as
long as we like. The data are placed in a keyboard buffer whose contents are returned to our program only after
a particular key, such as the Enter key (the trigger), is pressed. For a logical device, such as a locator, we can
move our pointing device to the desired location and then trigger the device with its button; the trigger will cause
the location to be returned to the application program. The relationship between measure and trigger for request
mode is shown in Figure 3.8.

Sample-mode input is immediate. As soon as the function call in the user program is encountered, the measure is
returned. Hence, no trigger is needed (Figure 3.9). In sample mode, the user must have positioned the pointing
device or entered data using the keyboard before the function call, because the measure is extracted immediately
from the buffer.


One characteristic of both request- and sample-mode input in APIs that support them is that the user must identify
which device is to provide the input. Consequently, we ignore any other information that becomes available from
any input device other than the one specified. Both request and sample modes are useful for situations where the
program guides the user but are not useful in applications where the user controls the flow of the program. For
example, a flight simulator or computer game might have multiple input devices such as a joystick, dials, buttons,
and switches, most of which can be used at any time. Writing programs to control the simulator with only sample-
and request-mode input is nearly impossible because we do not know what devices the pilot will use at any point
in the simulation. More generally, sample- and request-mode input are not sufficient for handling the variety of
possible human-computer interactions that arise in a modern computing environment.

Our third mode, event mode, can handle these other interactions. We introduce it in three steps. First, we show how event mode can be described as another mode within our measure-trigger paradigm. Second, we discuss the basics of clients and servers, where event mode is the preferred interaction mode. Third, we show an event-mode interface to OpenGL using GLUT, and we write demonstration programs using this interface.

Suppose that we are in an environment with multiple input devices, each with its own trigger and each running a
measure process. Each time that a device is triggered, an event is generated. The device measure, including the
identifier for the device, is placed in an event queue. This process of placing events in the event queue is
completely independent of what the application program does with these events. One way that the application
program can work with events is shown in Figure 3.10. The application program can examine the front event in
the queue or, if the queue is empty, can wait for an event to occur. If there is an event in the queue, the program
can look at the event's type and decide what to do. If, for example, the first event is from the keyboard
but the application program is not interested in keyboard input, the event can be discarded and the next event in
the queue can be examined.

2.3 CLIENTS AND SERVERS

So far, we have looked at our graphics system as a monolithic box with limited connections to the outside world, other than through our carefully controlled input devices and a display. Networks and multiuser computing have changed this picture dramatically, and to such an extent that, even if we had a single-user isolated system, its software probably would be configured as a simple client-server network.


If computer graphics is to be useful for a variety of real applications, it must function well in a world of distributed
computing and networks. In this world, our building blocks are entities called servers that can perform tasks for
clients. Clients and servers can be distributed over a network (Figure 3.11) or contained entirely within a single
computational unit. Familiar examples of servers include print servers, which can allow sharing of a high-speed
printer among users; compute servers, such as remotely located high-performance computers, accessible from
user programs; file servers that allow users to share files and programs, regardless of the machine they are logged
into; and terminal servers that handle dial-in access. Users and user programs that make use of these services are
clients or client programs. Servers can also exist at a lower level of granularity within a single operating system.
For example, the operating system might provide a clock service that multiple client programs can use. It is less
obvious what we should call a workstation connected to the network: It can be both a client and a server, or
perhaps more to the point, a workstation may run client programs and server programs concurrently.

The model that we use here was popularized by the X Window System. We use much of that system's
terminology, which is now common to most window systems and fits well with graphical applications.

A workstation with a raster display, a keyboard, and a pointing device, such as a mouse, is a graphics server. The
server can provide output services on its display and input services through the keyboard and pointing device.
These services are potentially available to clients anywhere on the network.


2.4 DISPLAY LISTS

Display lists illustrate how we can use clients and servers on a network to improve interactive graphics
performance. Display lists have their origins in the early days of computer graphics.

The original architecture of a graphics system was based on a general-purpose computer (or host) connected to a
display (Figure 3.12). The computer would send out the necessary information to redraw the display at a rate
sufficient to avoid noticeable flicker. At that time (circa 1960), computers were slow and expensive, so the cost
of keeping even a simple display refreshed was prohibitive for all but a few applications.

The solution to this problem was to build a special-purpose computer, called a display processor, with an
organization like that illustrated in Figure 3.13. The display processor had a limited instruction set, most of which
was oriented toward drawing primitives on the display. The user program was processed in the host computer,
resulting in a compiled list of instructions that was then sent to the display processor, where the instructions were
stored in a display memory as a display file, or display list. For a simple non-interactive application, once the
display list was sent to the display processor, the host was free for other tasks, and the display processor would
execute its display list repeatedly at a rate sufficient to avoid flicker. In addition to resolving the bottleneck due
to burdening the host, the display processor introduced the advantages of special-purpose rendering hardware.

Today, the display processor of old has become a graphics server, and the application program on the host
computer has become a client. The major bottleneck is no longer the rate at which we have to refresh the display

(although that is still a significant problem), but rather the amount of traffic that passes between the client and
server. In addition, the use of special-purpose hardware now characterizes high-end systems.

We can send graphical entities to a display in one of two ways. We can send the complete description of our
objects to the graphics server. For our typical geometric primitives, this transfer entails sending vertices,
attributes, and primitive types, in addition to viewing information. In our fundamental mode of operation,
immediate mode, as soon as the program executes a statement that defines a primitive, that primitive is sent to
the server for possible display and no memory of it is retained in the system. To redisplay the primitive after a
clearing of the screen, or in a new position after an interaction, the program must re-specify the primitive and
then must resend the information through the display process. For complex objects in highly interactive
applications, this process can cause a considerable quantity of data to pass from the client to the server.

Display lists offer an alternative to this method of operation. This second method is called retained-mode
graphics. We define the object once, and then put its description in a display list. The display list is stored in the
server and redisplayed by a simple function call issued from the client to the server. In addition to conferring the
obvious advantage of reduced network traffic, this model allows much of the overhead in executing commands
to be done once and have the results stored in the display list on the graphics server.

There are, of course, a few disadvantages to the use of display lists. Display lists require memory on the server,
and there is the overhead of creating a display list. Although this overhead is often offset by the efficiency of the
execution of the display list, it might not be if the data are changing.

2.4.1 Definition and Execution of Display Lists

Display lists have much in common with ordinary files. There must be a mechanism to define (create) and
manipulate (place information in) them. The definition of the permissible contents of a display list should be flexible enough to allow the user considerable freedom. OpenGL has a small set of functions to manipulate display lists and places only a few restrictions on display-list contents. We develop several simple examples to show the functionality.

Display lists are defined similarly to geometric primitives. There is a glNewList at the beginning and a glEndList
at the end, with the contents in between. Each display list must have a unique identifier, an integer that is usually macro-defined in the C program by means of a #define directive giving an appropriate name to the object in the list.
For example, the following code defines a red box.
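The accompanying listing did not survive in these notes. A minimal sketch, assuming a quadrilateral box and an arbitrary identifier BOX, is:

#define BOX 1 /* arbitrary display-list identifier */

glNewList(BOX, GL_COMPILE);
   glColor3f(1.0, 0.0, 0.0); /* red */
   glBegin(GL_POLYGON);
      glVertex2f(-1.0, -1.0);
      glVertex2f(1.0, -1.0);
      glVertex2f(1.0, 1.0);
      glVertex2f(-1.0, 1.0);
   glEnd();
glEndList();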


The flag GL_COMPILE tells the system to send the list to the server but not to display its contents. If we want
an immediate display of the contents while the list is being constructed, we can use the
GL_COMPILE_AND_EXECUTE flag instead. Each time that we wish to draw the box on the server, we execute
the function as follows:

glCallList(BOX);

Just as it does with other OpenGL functions, the current state determines which transformations are applied to
the primitives in the display list. Thus, if we change the model-view or projection matrices between executions
of the display list, the box will appear in different places or will no longer appear, as the following code fragment
demonstrates:
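The fragment itself is missing from these notes; a sketch of the idea, with illustrative clipping-window values, is:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(-2.0, 2.0, -2.0, 2.0); /* box appears centered in the window */
glCallList(BOX);

glLoadIdentity();
gluOrtho2D(0.1, 10.0, 0.1, 10.0); /* same list, new clipping window: box is mostly clipped out */
glCallList(BOX);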

A stack is a data structure in which the item placed most recently in the structure is the first removed. We can
save the present values of attributes and matrices by placing, or pushing, them on the top of the appropriate stack; we can recover them later by removing, or popping, them from the stack. A standard and safe procedure is always
to push both the attributes and matrices on their own stacks when we enter a display list, and to pop them when
we exit. Thus, we usually see the function calls

glPushAttrib(GL_ALL_ATTRIB_BITS);
glPushMatrix();
at the beginning of a display list and

glPopAttrib();
glPopMatrix();
at the end.


2.4.2 Text and Display Lists

Earlier, we introduced both stroke and raster text. Regardless of which type we choose to use, we need a
reasonable amount of code to describe a set of characters. For example, suppose that we use a raster font in which
each character is stored as an 8 × 13 pattern of bits. It takes 13 bytes to store each character. If we want to display
a string by the most straightforward method, we can send each character to the server each time that we want it
displayed. This transfer requires the movement of at least 13 bytes per character. If we define a stroke font using
only line segments, each character can require a different number of lines. If we use filled polygons for characters,
the shapes are simple to define, but we may need many line segments to achieve smooth boundaries; in general, we need more than 13 bytes per character to represent a stroke
font. For applications that display large quantities of text, sending each character to the display every time that it
is needed can place a significant burden on our graphics systems.

A more efficient strategy is to define the font once, using a display list for each character, and then to store the
font on the server using these display lists. This solution is similar to what is done for bitmap fonts on standard
alphanumeric display terminals. The patterns are stored in read-only memory (ROM) in the terminal, and each
character is selected and displayed based on a single byte: its ASCII code. The difference here is one of both
quantity and quality. We can define as many fonts as our display memory can hold, and we can treat stroke fonts
like other graphical objects, allowing us to translate, scale, and rotate them as desired.

The basics of defining and displaying a character string (1 byte per character) using a stroke font and display lists
provide a simple but important example of the use of display lists in OpenGL. The procedure is essentially the
same for a raster font. We can define either the standard 96 printable ASCII characters or we can define patterns
for a 256-character extended ASCII character set.

First, we define a function OurFont(char c), which will draw any ASCII character c that can appear in our string.
The function might have a form like the following:
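The listing is missing from these notes; a skeleton consistent with the description, with one case per supported character, might look like the following (the 'O' case is sketched after the next paragraph):

void OurFont(char c)
{
   int i;
   double angle;

   switch (c)
   {
      /* ... one case for each character we wish to support ... */
      case 'O':
         /* draw the letter O; see the fragment below */
         break;
      default:
         break; /* ignore characters we have not defined */
   }
}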


Within each case, we have to be careful about the spacing; each character in the string must be displayed to the
right. We can use the translate function to get the desired spacing or shift the vertex positions. Suppose that we define the letter O and we wish it to fit in a unit square. The corresponding part of OurFont might be as follows:
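The fragment is missing here. A sketch of the 'O' case, assuming inner and outer radii of 0.4 and 0.5 and requiring <math.h> for cos and sin (i and angle are declared at the top of OurFont), slots into the switch of OurFont:

case 'O':
   glTranslatef(0.5, 0.5, 0.0); /* move to the center of the unit box */
   glBegin(GL_QUAD_STRIP);
      for (i = 0; i <= 12; i++) /* 12 quadrilaterals around the ring */
      {
         angle = 3.14159 / 6.0 * i; /* 30 degrees in radians */
         glVertex2f(0.4 * cos(angle), 0.4 * sin(angle)); /* inner circle */
         glVertex2f(0.5 * cos(angle), 0.5 * sin(angle)); /* outer circle */
      }
   glEnd();
   glTranslatef(0.5, -0.5, 0.0); /* move on to the lower-right corner of the box */
   break;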

This code approximates the circle with 12 quadrilaterals. Each will be filled according to the current state.
Although we do not discuss the full power of transformations until Chapter 4, here we explain the use of the
translation function in this code.

We are working with two-dimensional characters. Hence, each character is defined in the plane z = 0, and we can
use whatever coordinate system we wish to define our characters. We assume that each character fits inside a
box. The usual strategy is to start at the lower-left corner of the first character in the string and to draw one character at a time, drawing each character such that we end at the lower-right corner of that character's box, which is the lower-left corner of the box of the next character.

The first translation moves us to the center of the character's box, which we set to be a unit square. We then
define our vertices using two concentric circles centered at this point (Figure 3.15). One way to envision the
translation function is to say that it shifts the origin for all the drawing commands that follow. After the 12
quadrilaterals in the strip are defined, we move to the lower-right corner of the box.


The two translations accumulate; as a result of these translations, we are in the proper position to start the next
character. Note that, in this example, we do not want to push and pop the matrices. Other characters can be defined
in a similar manner. Although our code is inelegant, its efficiency is of little consequence because the characters
are generated only once and then are sent to the graphics server as a compiled display list.

Suppose that we want to generate a 256-character set. The required code, using the OurFont function, is as
follows:
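The listing is again missing; following the glGenLists pattern that the next paragraph assumes, it presumably resembled:

GLuint i, base = glGenLists(256); /* reserve 256 consecutive unused list identifiers */

for (i = 0; i < 256; i++)
{
   glNewList(base + i, GL_COMPILE);
   OurFont((char) i); /* one display list per character */
   glEndList();
}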

When we wish to use these display lists to draw individual characters, rather than offsetting the identifier of the
display lists by base each time, we can set an offset as follows:

glListBase(base);

Finally, our drawing of a string is accomplished in the server by the function call

char *text_string;
glCallLists( (GLint) strlen(text_string), GL_BYTE, text_string);
which makes use of the standard C library function strlen to find the length of input string text_string. The first
argument in the function glCallLists is the number of lists to be executed. The third is a pointer to an array of a
type given by the second argument. The identifier of the kth display list executed is the sum of the list base
(established by glListBase) and the value of the kth character in the array of characters.


2.4.3 Fonts in GLUT

GLUT provides a few raster and stroke fonts. They do not make use of display lists; in the final example in this
chapter, however, we create display lists to contain one of these GLUT fonts. We can access a single character
from a monotype, or evenly spaced, font by the following function call:

glutStrokeCharacter(GLUT_STROKE_MONO_ROMAN, int character)

GLUT_STROKE_ROMAN provides proportionally spaced characters. You should use these fonts with caution.
Their size (approximately 120 units maximum) may have little to do with the units of the rest of your program;
thus, they may have to be scaled. We usually control the position of a character by using a translation before the
character function is called. In addition, each invocation of glutStrokeCharacter includes a translation to the
bottom right of the character's box in preparation for the next character. Scaling and translation affect the OpenGL
state, so here we should be careful to use glPushMatrix and glPopMatrix as necessary to prevent undesirable
positioning of objects defined later in the program. Our discussion of transformations in Chapter 4 should enable
you to use stroke fonts effectively.

Raster and bitmap characters are produced in a similar manner. For example, a single 8 × 13 character is obtained
using the following:

glutBitmapCharacter(GLUT_BITMAP_8_BY_13, int character)

Positioning of bitmap characters is considerably simpler than positioning of stroke characters, because
bitmap characters are drawn directly in the frame buffer and are not subject to geometric transformations, whereas
stroke characters are. OpenGL keeps, within its state, a raster position. This position identifies where the next
raster primitive will be placed; it can be set using the glRasterPos*() function. The user program typically moves
the raster position to the desired location before the first character in a string defined by glutBitmapCharacter is
invoked.
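As a brief illustration, a whole string can be drawn by setting the raster position once and then emitting the characters; glutBitmapCharacter advances the raster position by the character width automatically. The helper name drawBitmapText is ours, not GLUT's:

void drawBitmapText(const char *s, float x, float y)
{
   glRasterPos2f(x, y); /* where the string starts, in world coordinates */
   while (*s)
      glutBitmapCharacter(GLUT_BITMAP_8_BY_13, *s++);
}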

2.5 DISPLAY LISTS AND MODELING

Looking at the spectrum of applications of interactive computer graphics, we can observe that a large fraction of
them involve some sort of interactive modeling. For example, CAD applications allow the user to interactively
design buildings, model circuits, and build computer animations. From the perspective of computer graphics, we
must design the user interfaces for such applications and will do so over the next few sections. But underlying
these applications, as with most computer applications, are both conceptual and implemented models, employing a variety of data structures that are designed to support efficient interaction.


Because display lists can call other display lists, they are powerful tools for building hierarchical models that can
incorporate relationships among parts of a model. Consider a simple face modeling system that can produce
images such as those shown in Figure 3.16 that might be used for animation. Each face has two identical eyes
and two identical ears plus the outline, a nose, and a mouth. We could specify the constituent parts through display
lists, such as the following:
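The part definitions are missing from these notes; they presumably followed the usual pattern, one display list per part, for example:

#define EYE 1
#define NOSE 2 /* identifiers are illustrative */

glNewList(EYE, GL_COMPILE);
   /* code to draw an eye */
glEndList();

glNewList(NOSE, GL_COMPILE);
   /* code to draw a nose */
glEndList();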

The code for a face would then use transformations (Chapter 5) to bring each component into its desired location
as follows:
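A sketch of the face list, with illustrative identifiers and placeholder offsets, shows the key point: the same part list can be called more than once:

#define FACE 10

glNewList(FACE, GL_COMPILE);
   glPushMatrix();
      glCallList(OUTLINE);
      glTranslatef(-0.2, 0.2, 0.0); /* placeholder offset to the first eye */
      glCallList(EYE);
      glTranslatef(0.4, 0.0, 0.0); /* placeholder offset to the second eye */
      glCallList(EYE); /* same list, second instance */
   glPopMatrix();
glEndList();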

There are some significant advantages to this approach. First, we can make use of the fact that there are multiple
instances of the same component by calling the same display list multiple times. Second, if we want to create a
different character, we can change one or more of the display lists for the constituent parts but we can leave the
display list for the face unchanged because the structure of a face as described in it remains unchanged.


2.6 PROGRAMMING EVENT-DRIVEN INPUT

2.6.1 Using the Pointing Device

Two types of events are associated with the pointing device, which is conventionally assumed to be a mouse but
could be a trackball or a data tablet.

A move event is generated when the mouse is moved with one of the buttons pressed. If the mouse is moved
without a button being held down, this event is called a passive move event. After a move event, the position of
the mouse, its measure, is made available to the application program.

A mouse event occurs when one of the mouse buttons is either pressed or released. A button being held down
does not generate a mouse event until the button is released. The information returned includes the button that
generated the event, the state of the button after the event (up or down), and the position of the cursor tracking
the mouse in window coordinates (with the origin in the upper-left corner of the window). We register the mouse
callback function, usually in the main function, by means of the GLUT function as follows:

glutMouseFunc(myMouse);

The mouse callback must have the form

void myMouse(int button, int state, int x, int y)

and is written by the application programmer. Within the callback function, we define the actions that we want
to take place if the specified event occurs. There may be multiple actions defined in the mouse callback function
corresponding to the many possible button and state combinations. For our simple example, we want the pressing
of the left mouse button to terminate the program. The required callback is the following single-line function:
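The listing is missing here; matching the description, the callback is simply:

void myMouse(int button, int state, int x, int y)
{
   if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
      exit(0);
}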

If any other mouse event, such as the pressing of one of the other buttons, occurs, no response action will occur,
because no action corresponding to these events has been defined in the callback function.

First, we look at the main program, which is much the same as in our previous examples; the complete listing appears at the end of this module.


The reshape event is generated whenever the window is resized, such as by a user interaction; we discuss it next.
We do not need the required display callback for drawing in this example because the only time that primitives
will be generated is when a mouse event occurs. Because GLUT requires that all programs have a display
callback, we must include this callback, although it can have a simple body:
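As in the complete program at the end of this module, the body just clears the window:

void myDisplay()
{
   glClear(GL_COLOR_BUFFER_BIT);
}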

The mouse callbacks are again in the function myMouse.

We need three global variables. The size of the window may change dynamically, and its present size should be
available, both to the reshape callback and to the drawing function drawSquare. If we want to change the size of
the squares we draw, we may find it beneficial to make the square-size parameter global as well. Our initialization
routine selects a clipping window that is the same size as the window created in main and specifies a viewport to
correspond to the entire window. This window is cleared to black. Note that we could omit the setting of the
window and viewport here because we are merely setting them to the default values.


Our square-drawing routine has to take into account that the position returned from the mouse event is in the
window coordinate system, which has its origin at the top left of the window. Hence, we have to flip
the y value returned, using the present height of the window (the global wh) as follows:
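The routine, as in the complete program at the end of this module:

void drawSquare(int x, int y)
{
   y = wh - y; /* flip y: the window system's origin is top-left, OpenGL's is bottom-left */
   glBegin(GL_POLYGON);
      glVertex2f(x + size, y + size);
      glVertex2f(x - size, y + size);
      glVertex2f(x - size, y - size);
      glVertex2f(x + size, y - size);
   glEnd();
   glFlush();
}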

2.6.2 Window Events

Most window systems allow a user to resize the window interactively, usually by using the mouse to drag a corner
of the window to a new location. This event is an example of a window event.

In our square-drawing example, we ensure that squares of the same size are drawn, regardless of the size or shape
of the window. We clear the screen each time it is resized, and we use the entire new window as our drawing
area. The reshape event returns in its measure the height and width of the new window. We use these values to
create a new OpenGL clipping window using gluOrtho2D, as well as a new viewport with the same aspect ratio.
We then clear the window to black. Thus, we have the following callback:
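A sketch consistent with this description (the version in the complete program at the end omits the explicit clear):

void myReshape(GLsizei w, GLsizei h)
{
   /* adjust clipping box */
   glMatrixMode(GL_PROJECTION);
   glLoadIdentity();
   gluOrtho2D(0.0, (GLdouble) w, 0.0, (GLdouble) h);
   glMatrixMode(GL_MODELVIEW);
   glLoadIdentity();

   /* adjust viewport and clear it */
   glViewport(0, 0, w, h);
   glClear(GL_COLOR_BUFFER_BIT);
   glFlush();

   /* save new window size in global variables */
   ww = w;
   wh = h;
}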


There are other possibilities here. We could change the size of the squares to match the increase or decrease of
the window size. We have not considered other events, such as a window movement without resizing, an event
that can be generated by a user who drags the window to a new location. And we have not specified what to do
if the window is hidden behind another window and then is exposed (or brought to the front) by the user. There
are callbacks for these events, and we can write simple functions similar to myReshape for them or we can rely
on the default behaviour of GLUT. Another simple change that we can make to our program is to have new
squares generated as long as one of the mouse buttons is held down. The relevant callback is the motion callback,
which we set through the following function:

glutMotionFunc(drawSquare);

2.6.3 Keyboard Events

We can also use the keyboard as an input device. Keyboard events are generated when the mouse is in the window
and one of the keys is pressed or released. The GLUT function glutKeyboardFunc is the callback for events
generated by pressing a key, whereas glutKeyboardUpFunc is the callback for events generated by releasing a
key.

When a keyboard event occurs, the ASCII code for the key that generated the event and the location of the mouse
are returned. All the keyboard callbacks are registered in a single callback function, such as the following:

glutKeyboardFunc(myKey);

For example, if we wish to use the keyboard only to exit the program, we can use the following callback function:
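For example, as in the complete program at the end of this module:

void myKey(unsigned char key, int x, int y)
{
   if (key == 'q' || key == 'Q')
      exit(0);
}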


2.6.4 The Display and Idle Callbacks

The display callback is specified in GLUT by the following function call:

glutDisplayFunc(myDisplay);

It is invoked when GLUT determines that the window should be redisplayed. One such situation occurs when the
window is opened initially; another happens after a resize event. Because we know that a display event will be
generated when the window is first opened, the display callback is a good place to put the code that generates
most non-interactive output.

The display callback can be used in other contexts, such as in animations, where various values defined in the
program may change. We can also use GLUT to open multiple windows. The state includes the present window,
and we can render different objects into different windows by changing the present window. We can also iconify
a window by replacing it with a small symbol or picture. Consequently, interactive and animation programs will
contain many calls for the re-execution of the display function. Rather than call it directly, we use the GLUT
function as follows:

glutPostRedisplay();

Using this function, rather than invoking the display callback directly, avoids extra or unnecessary screen
drawings by setting a flag inside GLUT indicating that the display needs to be redrawn. At the end
of each execution of the main loop, GLUT uses this flag to determine whether the display function will be
executed. Thus, using glutPostRedisplay ensures that the display will be drawn only once each time the program
goes through the event loop.

2.6.5 Window Management

GLUT also supports multiple windows and subwindows of a given window. We can open a second top-level
window as follows:

id=glutCreateWindow("second window");

The returned integer value allows us to select this window as the current window into which objects will be rendered, as follows:

glutSetWindow(id);


We can make this window have properties different from those of other windows by invoking
glutInitDisplayMode before glutCreateWindow. Furthermore, each window can have its own set of callback
functions because callback specifications refer to the present window.

2.7 MENUS

GLUT provides one additional feature, pop-up menus, that we can use with the mouse to create sophisticated
interactive applications.

Using menus involves taking a few simple steps. We must define the actions corresponding to each entry in the
menu. We must link the menu to a particular mouse button. Finally, we must register a callback function for each
menu. We can demonstrate simple menus with the example of a pop-up menu that has three entries.

The first selection allows us to exit our program. The second and third change the size of the squares in our
drawSquare function. We name the menu callback demo_menu. The function calls to set up the menu and to
link it to the right mouse button should be placed in our main function. They are as follows:
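The calls are missing from these notes; using the standard GLUT menu functions, they presumably resembled:

glutCreateMenu(demo_menu);
glutAddMenuEntry("quit", 1);
glutAddMenuEntry("increase square size", 2);
glutAddMenuEntry("decrease square size", 3);
glutAttachMenu(GLUT_RIGHT_BUTTON);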

The function glutCreateMenu registers the callback function demo_menu. The second argument in each menu-entry definition is the identifier passed to the callback when the entry is selected. Hence, our callback function is as follows:
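A sketch consistent with the three entries above:

void demo_menu(int id)
{
   if (id == 1)
      exit(0);
   else if (id == 2)
      size = 2 * size; /* increase square size */
   else if (size > 1)
      size = size / 2; /* decrease square size, but not below 1 */
   glutPostRedisplay();
}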

The call to glutPostRedisplay requests a redraw through the glutDisplayFunc callback, so that the screen is drawn
again without the menu.


GLUT also supports hierarchical menus, as shown in Figure 3.18. For example, suppose that we want the main
menu that we create to have only two entries. The first entry still causes the program to terminate, but now the
second causes a submenu to pop up. The submenu contains the two entries for changing the size of the square in
our square-drawing program. The following code for the menu (which is in main) should be clear:
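The code is missing here; a sketch using glutAddSubMenu (the callback names top_menu and size_menu are illustrative):

int sub_menu;

sub_menu = glutCreateMenu(size_menu); /* submenu for resizing */
glutAddMenuEntry("increase square size", 2);
glutAddMenuEntry("decrease square size", 3);

glutCreateMenu(top_menu); /* top-level menu */
glutAddMenuEntry("quit", 1);
glutAddSubMenu("resize", sub_menu);
glutAttachMenu(GLUT_RIGHT_BUTTON);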

Program to draw Square using Mouse and Keyboard Callbacks

#include <stdlib.h>
#include <GL/glut.h>

void drawSquare(int x, int y); /* forward declaration: called from myMouse before its definition */

GLsizei wh = 500, ww = 500; /* initial window height and width */
GLfloat size = 3.0; /* one-half of a square's side length */

void myDisplay()
{
   glClear(GL_COLOR_BUFFER_BIT);
}

void myMouse(int btn, int state, int x, int y)
{
   if (btn == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
      drawSquare(x, y); /* left button: draw a square at the cursor */
   if (btn == GLUT_RIGHT_BUTTON && state == GLUT_DOWN)
      exit(0); /* right button: quit */
}

void myInit()
{
   /* set initial viewing conditions */
   glViewport(0, 0, ww, wh);
   glMatrixMode(GL_PROJECTION);
   glLoadIdentity();
   gluOrtho2D(0.0, (GLdouble) ww, 0.0, (GLdouble) wh);
   glMatrixMode(GL_MODELVIEW);
   glClearColor(0.0, 0.0, 0.0, 1.0);
   glColor3f(1.0, 0.0, 0.0); /* red squares */
}

void drawSquare(int x, int y)
{
   y = wh - y; /* flip y: mouse origin is top-left, OpenGL's is bottom-left */
   glBegin(GL_POLYGON);
      glVertex2f(x + size, y + size);
      glVertex2f(x - size, y + size);
      glVertex2f(x - size, y - size);
      glVertex2f(x + size, y - size);
   glEnd();
   glFlush();
}

void myReshape(GLsizei w, GLsizei h)
{
   /* adjust clipping box */
   glMatrixMode(GL_PROJECTION);
   glLoadIdentity();
   gluOrtho2D(0.0, (GLdouble) w, 0.0, (GLdouble) h);
   glMatrixMode(GL_MODELVIEW);
   glLoadIdentity();
   /* adjust viewport */
   glViewport(0, 0, w, h);
   /* save new window size in global variables */
   ww = w;
   wh = h;
}

void myKey(unsigned char key, int x, int y)
{
   if (key == 'q' || key == 'Q')
      exit(0);
}

int main(int argc, char **argv)
{
   glutInit(&argc, argv);
   glutInitWindowSize(ww, wh); /* globally defined initial window size */
   glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
   glutCreateWindow("square");
   myInit();
   glutReshapeFunc(myReshape);
   glutMouseFunc(myMouse);
   glutDisplayFunc(myDisplay);
   glutMotionFunc(drawSquare);
   glutKeyboardFunc(myKey);
   glutMainLoop();
   return 0; /* unreachable: glutMainLoop does not return */
}


Output: (screenshot not reproduced) red squares appear in the window wherever the left mouse button is clicked or dragged.

