CO3 and CO4
Session Objective
• By the end of the session the students / audience will be able to understand what spatial data is and the visualization approaches used to represent it.
2
RECAP…
Non-spatial Data: Data that relate to a specific, precisely defined location. The data are often
statistical but may be text, images or multimedia.
These are linked in the GIS to spatial data that define the location.
3
Spatial Data:
Data that define a location. These are in the form of graphic primitives that are usually either
points, lines, polygons or pixels.
4
Spatial Data:
Spatial data carries location information, such as latitude and longitude, so the information can be located easily.
5
The fundamental differences between spatial and non-spatial data are:
6
Why do we care about location?
A non-spatial model cannot accurately reflect the processes and interactions happening in our
world.
7
Spatial data model:
The spatial data model consists of 2 parts:
- geometry
- properties
{
  "type": "Feature",
  "geometry": {
    "type": "Point",
    "coordinates": [77.58270263671875, 12.963074139604124]
  },
  "properties": { "id": 1, "name": "Vizag" }
}
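A minimal sketch (assuming Python and its standard json module; the feature text above is reused as-is) showing how the two parts of the model, geometry and properties, can be read:

import json

feature_text = '''
{ "type": "Feature",
  "geometry": { "type": "Point",
                "coordinates": [77.58270263671875, 12.963074139604124] },
  "properties": { "id": 1, "name": "Vizag" } }
'''

feature = json.loads(feature_text)
lon, lat = feature["geometry"]["coordinates"]   # geometry: where the feature is
name = feature["properties"]["name"]            # properties: what the feature is
print(name, lon, lat)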
8
Map Projections and Spatial Reference System :
If there were one thing that makes spatial data 'special', it would have to be the Coordinate Reference System (CRS), also called the Spatial Reference System (SRS).
- Global Maps
- Country Maps
- Local and Regional Maps
9
Global Maps:
The Equal Earth projection is the preferred and most modern alternative when
creating global maps.
An added benefit of this projection is that it preserves area, so it is also a good choice for global-scale research that involves equal-area grids.
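A minimal sketch of such a global map (assuming the cartopy and matplotlib packages are available):

import matplotlib.pyplot as plt
import cartopy.crs as ccrs

ax = plt.axes(projection=ccrs.EqualEarth())   # equal-area projection for global maps
ax.set_global()
ax.coastlines()                               # coastlines drawn in the chosen projection
plt.show()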
10
Country Maps:
For guidance, you can consult the national mapping agency of the region. The National Spatial Framework (NSF) recommends the CRS EPSG:7755 for country-level mapping of India.
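A minimal sketch (assuming the pyproj package) that projects a WGS 84 longitude/latitude pair into EPSG:7755:

from pyproj import Transformer

# EPSG:4326 is WGS 84 longitude/latitude; EPSG:7755 is the India NSF Lambert Conformal Conic CRS.
to_india_nsf = Transformer.from_crs("EPSG:4326", "EPSG:7755", always_xy=True)
x, y = to_india_nsf.transform(77.5827, 12.9631)   # (lon, lat) -> projected coordinates in metres
print(x, y)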
11
Local and Regional Maps:
Similar to the Coordinate Reference Systems at country level, most countries have
Coordinate Reference Systems at the state / province level.
Distortions within each region are reduced, and if the area of interest lies within the region, this is a reasonable option for both analysis and visualisation.
12
Session Topic: Scalar Volumes, Isosurfacing,
Volume Rendering, Transfer Function Design
13
Session Objective
• By the end of the session the students / audience will be able to understand what volume visualization, direct volume rendering, and transfer functions are.
14
SCALAR FIELD
Spatial data define a location; they capture location, shape, size, and orientation.
These are in the form of graphic primitives that are usually either points, lines, polygons or
pixels.
A scalar spatial field has a single value associated with each spatially defined cell.
Example:
Fig. (a): A scalar field such as temperature or pressure, where the intensity of the field is represented by different color hues.
15
SCALAR FIELD
(b) Line integral of a scalar field:
-A scalar field has a value associated with each point in space. In a two-dimensional field, the value at each point can be viewed as the height of a surface embedded in three dimensions. The line integral of a curve through this scalar field equals the area underneath that curve traced over the surface the field defines.
-Color mapping is a common scalar-visualisation technique that maps scalar data to colours displayed by the computer system. The mapping is implemented with a colour lookup table: scalar values act as indices into the table, and the table holds a variety of colours.
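A minimal sketch of colour mapping through a lookup table (assuming numpy and matplotlib):

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors

scalars = np.array([0.1, 5.0, 7.3, 9.9])                          # example scalar data
norm = mcolors.Normalize(vmin=scalars.min(), vmax=scalars.max())  # scalar -> [0, 1]
lut = plt.get_cmap("viridis")                                     # 256-entry colour lookup table
rgba = lut(norm(scalars))                                         # normalized scalar -> colour
print(rgba)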
-By contrast, a vector field in the plane can be visualised as a series of arrows of a given magnitude and direction, each attached to a point in the plane. Vector fields are also used to model the speed and direction of a moving fluid in space, or the strength and direction of a force, such as a magnetic or gravitational force, as it varies from one point to another.
16
There are three major families of idioms for visually encoding scalar fields:
a) slicing
b) isocontours / isosurfaces
c) direct volume rendering
17
Slicing:
Slicing planes are a universal tool: they act as simple clipping geometry to provide clear cut-away views of the data.
-Effectively viewing and visually analysing complex 3D data is a difficult task. Occlusions, overlaps, and projective distortions can be major obstacles to accurate and unambiguous analysis of the data. Slicing planes are a common method for solving many of these problems: acting as simple clipping geometry, they provide direct cut-away views of the results.
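A minimal sketch of an axis-aligned slicing plane (assuming numpy and matplotlib; the volume is a synthetic example):

import numpy as np
import matplotlib.pyplot as plt

z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = np.exp(-(x**2 + y**2 + z**2))          # example 3D scalar field

k = volume.shape[0] // 2                        # index of the slicing plane
plt.imshow(volume[k, :, :], cmap="viridis")     # cut-away view at z = 0
plt.colorbar(label="scalar value")
plt.show()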
18
Isosurfaces: An isosurface is a 3D surface representation of points with equal values in a 3D data
distribution. It is a surface that represents points with a constant value within a volume of space (e.g.
pressure, temperature, velocity, density); in other words, it is a level set of a continuous function
whose domain is 3D space.
Contour plot:
-display result quantities on a series of lines
-Available for 3D and 2D plot groups
-useful for heat transfer and acoustics applications
Isosurface plots:
-display result quantities on a series of surfaces
-Available for 3D plot groups
-useful for scalar fields
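A minimal sketch of isosurface extraction with marching cubes (assuming numpy and scikit-image; the field is a synthetic example):

import numpy as np
from skimage import measure

z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = x**2 + y**2 + z**2                     # example scalar field: squared distance from the origin

# Triangulated surface on which the field equals the chosen iso-value (a sphere here).
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(verts.shape, faces.shape)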
19
Volume Rendering (https://developer.nvidia.com/sites/all/modules/custom/gpugems/books/GPUGems/gpugems_ch39.html)
In scientific visualization, Volume rendering is a set of techniques used to display a 2D projection of a
3D discretely sampled dataset, typically a 3D scalar field. Applications: Medical Science, Engineering,
Earth sciences
Methods:
-Indirect volume rendering
• Isosurface extraction
• Data mapped to geometric primitives
20
Below figure presents a volume rendering of a head dataset using the resulting 2D transfer function, showing
examples of the base materials and these three boundaries: (D) air–tissue, (E) tissue–bone, and (F) air–bone. A
cutting plane has been positioned to show the internal structure of the head.
21
Transfer function :
Finding the right transfer function manually often requires considerable trial and error because
features of interest in the spatial field can be difficult to isolate: uninteresting regions in space
may contain the same range of data values as interesting ones.
22
Direct Volume Rendering
Direct Volume Rendering methods produce images of a 3D volumetric data set directly, without extracting geometric surfaces from the data.
These techniques use an optical model to map data characteristics, such as colour and opacity, to optical
properties.
Optical properties are accumulated along each viewing ray during rendering to form an image of the data.
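A minimal sketch of this idea for a single viewing ray (assuming numpy; the transfer function below is an illustrative choice, not a fixed rule):

import numpy as np

def transfer_function(s):
    # Map a scalar value in [0, 1] to a colour and an opacity.
    rgb = np.array([s, 0.2, 1.0 - s])
    alpha = 0.05 + 0.3 * s                   # denser material is made more opaque
    return rgb, alpha

samples = np.linspace(0.0, 1.0, 100)         # scalar values sampled along one viewing ray
color, opacity = np.zeros(3), 0.0
for s in samples:                            # front-to-back compositing
    rgb, a = transfer_function(s)
    color += (1.0 - opacity) * a * rgb
    opacity += (1.0 - opacity) * a
    if opacity > 0.99:                       # early ray termination
        break
print(color, opacity)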
If conventional animation techniques such as image blending are applied directly to DVR videos, correct depth cues cannot be obtained and misleading information is added.
There are two animation methods that address these issues: the image-centric method and the data-centric method.
In the image-centric method, image blending synthesises the intermediate frames between any two successive keyframes.
23
The image-centric approach is a practical solution for simple volumetric data and low-end platforms.
However, because volume rendering operations are nonlinear, the image-blending method may fail for complex data.
In the data-centric method, all intermediate frames are rendered with direct volume rendering to expose complex structures and guarantee accurate depth cues.
24
Vector Fields:
A vector-valued function that assigns a vector (with direction and magnitude) to each point in space.
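A minimal sketch of a 2D vector field drawn as arrows (assuming numpy and matplotlib; the field is a synthetic rotation):

import numpy as np
import matplotlib.pyplot as plt

y, x = np.mgrid[-2:2:20j, -2:2:20j]
u, v = -y, x                                 # example field: rotation about the origin

plt.quiver(x, y, u, v)                       # one arrow (direction and magnitude) per grid point
plt.axis("equal")
plt.show()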
25
Session Objective
• By the end of the session the students will be able to understand what scalar and vector fields are and their characteristics.
26
Direct Visualization of Vector Fields
29
Geometric Flow
32
Integral curves play an important role in visualizing the associated vector field and in
understanding the underlying physics of the flow.
In steady flows, pathlines, streamlines, and streaklines are identical. When the vector field depends
explicitly on time, these curves are distinct from one another.
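A minimal sketch that traces streamlines of a steady 2D field (assuming numpy and matplotlib); for a steady flow these curves coincide with pathlines and streaklines:

import numpy as np
import matplotlib.pyplot as plt

y, x = np.mgrid[-2:2:40j, -2:2:40j]
u, v = -y, x                                 # steady rotational flow

plt.streamplot(x, y, u, v, density=1.2)      # integral curves tangent to the vector field
plt.axis("equal")
plt.show()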
33
Texture-Based Methods
34
Spot Noise
35
Feature-Based Methods
Feature Extraction
●Features:
●Represent interesting structures or objects in the data
●Extract and visualize them
●The original (large) field is not needed anymore for visualization
●Extraction can (usually) be automated
●Rendering is fast and (usually) interactive
●Medium to high cognitive load
●Typical features:
●Topological features
●Vortices
●Shock waves
●Extremal structures
36
Spatial Uncertainty
Web data visualization
37
Session Objective:
At the conclusion of this session, students will be able to understand spatial data visualization in a modern web-based environment.
38
Objective :
39
-MacEachren et al. (2005) note that "information uncertainty is a dynamic term with multiple meanings across knowledge domains and application contexts."
As such, there is no widely accepted definition of uncertainty, but such a definition is easier to formulate in the context of this study: the visualisation of uncertainty for spatio-temporal data.
-Uncertainty sources: Wittenbrink et al. (1995), Pang et al. (1997), and Pang (2008) use an uncertainty visualisation pipeline that describes the processes and stages involved in spatial data visualisation and its associated uncertainty.
At different stages (acquisition, development, and visualisation) uncertainty is introduced along the pipeline, and it is then propagated towards the end of the pipeline, where visualisation and analysis take place.
40
Web structure data and web usage data
- Pang, A.T., Wittenbrink, C.M., Lodha, S.K. (1997) Approaches to uncertainty visualization. The Visual Computer 13: 370-390.
- Pang, A.T. (2008) Visualizing Uncertainty in Natural Hazards, in Bostrom, A., French, S., Gottlieb, S. (eds) Risk Assessment, Modeling and Decision Support. Springer-Verlag, Berlin.
- Longley, P.A., Goodchild, M.F. (2005) Geographical Information Systems and Science. Wiley, Chichester.
41
42
Longley and Goodchild (2005) added a description stage at the start of the pipeline, resulting in a pipeline (Figure) that identifies four stages in which uncertainty can be introduced.
-Interpretation. The first step in the visualisation process is to interpret the natural phenomenon. Our understanding of natural processes is often incomplete, and at this point incomplete knowledge or lack of information can lead to uncertainty.
43
-Transformation.
-The introduction of uncertainty in the visualisation pipeline is exacerbated by the fact that uncertainty propagates and affects successive stages of the pipeline (Wittenbrink et al. 1996).
44
Properties of Geospatial Data :
-Uncertainty in geospatial datasets is defined across a range of data formats, data types, and various aspects of data quality. As shown in the Figure, only datasets in a raster format with scalar attribute values are considered here.
46
Session Topic: Web content data multimedia data
visualization
47
Web content data multimedia data
Web distributed system
48
The growing demand for World Wide Web (WWW) services has made caching of documents a requirement for reducing download times and Internet traffic.
To use caching efficiently, an informed decision must be made about which documents to remove from the cache when the cache is saturated.
This is especially important in a wireless network, where the size of the client terminal's cache is limited.
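A minimal sketch of one possible replacement policy, least recently used (LRU), for a small client-side document cache (assuming only the Python standard library; real browsers and proxies use their own policies):

from collections import OrderedDict

class DocumentCache:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self._docs = OrderedDict()               # url -> cached document body

    def get(self, url, fetch):
        if url in self._docs:
            self._docs.move_to_end(url)          # cache hit: mark as recently used
            return self._docs[url]
        body = fetch(url)                        # cache miss: download the document
        self._docs[url] = body
        if len(self._docs) > self.capacity:      # cache saturated: evict the least recently used entry
            self._docs.popitem(last=False)
        return body

cache = DocumentCache(capacity=2)
page = cache.get("http://example.com/a", lambda u: "<html>...</html>")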
49
The World Wide Web (or Web for short) is very popular and is used by people all over the world.
However, its utility is threatened by its ever-growing popularity. The WWW has experienced a dramatic increase in popularity, and many reports indicate that its growth will continue at an exponential rate. The main reasons for which a user would opt for Web caching include:
To increase bandwidth availability by curbing the transmission of redundant data.
To reduce network congestion.
To improve response times.
50
Architecture Aspects:
i) Browser caching: A browser first looks for an object in its cache before requesting it from the website. Caching frequently used Web objects speeds up web surfing.
ii) Proxy caching: Proxies serve hundreds or thousands of users. Proxy caches are not part of the client or the origin server; they sit in the network between the two.
iii) Reverse proxy caching: Typically, reverse proxies are deployed in front of Web servers. All connections coming from the Internet addressed to one of the Web servers are routed through the proxy server.
51
Figure: image indexing pipeline - user query, query expansion, keyword selection, and the image selected for indexing (the steps are described in detail below).
52
Image indexing requires an image dataset.
For the collection of public-domain images, the top three image search engines in the world (Google, Yahoo, and Bing) were employed. Keywords are entered manually as the "user query".
However, to improve retrieval performance, a query-expansion process is used. Next, the automatic system module performs the following tasks to select an image for indexing.
Based on the indexed text around a keyword and the image file name, the web search engine's crawling software gathers all the data related to the user query. It also tries to contextualize the user query to the best of its ability.
The web crawling process begins with the spiders visiting a list of web addresses from past crawls and sitemaps provided by website owners.
When spiders visit a website, they use the links on that website to reach other pages. After retrieving information from websites, crawlers store and organize the available data (billions of web pages) into a search index.
In this process, search engines use a knowledge graph to search for other similar images along with the keyword information.
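A minimal sketch of how crawled text can be organized into a search index, here a tiny inverted index (assuming only the Python standard library; the URLs and text are hypothetical):

from collections import defaultdict

index = defaultdict(set)                         # keyword -> set of page URLs

def add_to_index(url, text):
    for word in text.lower().split():
        index[word].add(url)

add_to_index("http://example.com/1", "sunset beach photo")
add_to_index("http://example.com/2", "beach volleyball image")
print(index["beach"])                            # pages relevant to the query "beach"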
53
Every search engine follows its own page-ranking algorithm. The ranking sorts through the pages stored in the search index to deliver the results most relevant to the searched keyword.
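A minimal sketch of the classic PageRank power iteration on a tiny link graph (assuming numpy; real search engines combine many more ranking signals):

import numpy as np

# Column-stochastic link matrix: entry [i, j] is the probability of moving to page i from page j.
links = np.array([[0.0, 0.5, 0.5],
                  [0.5, 0.0, 0.5],
                  [0.5, 0.5, 0.0]])
damping = 0.85
rank = np.ones(3) / 3                            # start with equal rank for all pages
for _ in range(50):
    rank = (1 - damping) / 3 + damping * links @ rank
print(rank)                                      # higher rank = page considered more authoritative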
Web content data multimedia data visualization
-SKICAT
-Color histogram matching
-Multimedia miner
-Shot boundary detection
60
What Is a Dashboard?
A dashboard is a visual display of the most important information needed to achieve one or
more objectives; consolidated and arranged on a single screen so the information can be
monitored at a glance.
• Just as the dashboard of a car provides critical information needed to operate the vehicle at a glance, a BI dashboard
serves a similar purpose, whether you're using it to make strategic decisions for a huge corporation, run the daily
operations of a team, or perform tasks that involve no one but yourself.
• The means is a single‐screen display, and the purpose is to efficiently monitor the information needed to achieve
one's objectives.
Cont..
• Salient feature of Dashboard –
• Quantitative Data
• These measures are often expressed in summary form, most often as totals, slightly less often as averages (such as average
selling price), occasionally as measures of distribution (such as a standard deviation), and rarer still as measures of correlation
(such as a linear correlation coefficient).
• Summary expressions of quantitative data are particularly useful in dashboards, where it is necessary to monitor an array of
business phenomena at a glance. Obviously, the limited real estate of a single screen requires concise communication.
Cont..
• Variations in timing -Measures of what's currently going on can be expressed in a variety of timeframes. A few typical examples
include:
• This year to date
• This week to date
• This quarter to date
• Yesterday
• Enrichment through comparison- These measures can be displayed by themselves, but it is usually helpful to compare them to
one or more related measures to provide context and thereby enrich their meaning.
• These comparisons are often expressed graphically to clearly communicate the differences between the values, which might
not leap out as dramatically through the use of text alone.
• Multiple instances of a measure, each representing a categorical subdivision of the measure (for example, sales subdivided
into regions or a count of orders subdivided into numeric ranges in the form of a frequency distribution)
• Temporal instances of a measure (that is, a time series, such as monthly instances of the measure)
• Enrichment through evaluation - Because with a dashboard a great deal of data must be evaluated quickly, it also is quite
useful to explicitly declare whether something is good or bad.
• Such evaluative information is often encoded as special visual objects (for example, a traffic light) or as visual attributes (for
example, by displaying the measure in bright red to indicate a serious condition).
• When designed properly, simple visual indicators can clearly alert users to the state of particular measures without altering
the overall design of the dashboard.
• Evaluative indicators need not be limited to binary distinctions between good and bad, but if they go beyond a few distinct states (for example, very bad, bad, acceptable, good, and very good), they run the risk of becoming too complex to interpret at a glance.
Cont..
• Non-Quantitative Data –
• Although most information that typically finds its way onto a dashboard is quantitative, some types of
non‐quantitative data, such as simple lists, are fairly common as well. Here are a few examples:
• Top 10 customers
• Issues that need to be investigated
• Tasks that need to be completed
• People who need to be contacted
• Another type of non‐quantitative data occasionally found on dashboards relates to schedules, including
tasks, due dates, the people responsible, and so on.
• This is common when the job that the dashboard supports involves the management of projects or
processes.
Dashboard Design issues
• Exceeding the boundaries of a single screen
• Supplying inadequate context for the data
• Displaying excessive detail or precision
• Choosing a deficient measure
• Choosing inappropriate display media
• Introducing meaningless variety
• Using poorly designed display media
• Encoding quantitative data inaccurately
• Arranging the data poorly
• Highlighting important data ineffectively or not at all
• Cluttering the display with useless decoration
• Misusing or overusing color
• Designing an unattractive visual display
Exceeding the boundaries of a single screen
Supplying inadequate context for the data
• For instance, individual numbers on a dashboard are stored as discrete chunks, but a well‐designed
graphical pattern, such as the pattern formed by one or more lines in a line graph, can represent a great
deal of information as a single chunk.
• This is one of the great advantages of graphs (when used appropriately and skillfully designed) over
text.
• Dashboards should be designed in a way that supports optimal chunking together of information so
that it can be perceived and understood most efficiently, in big visual gulps.
Cont..
Visually encoding data for rapid perception –
• Preattentive processing, the early stage of visual perception that rapidly occurs below the level of
consciousness, is tuned to detect a specific set of visual attributes.
• Attentive processing is sequential, and therefore much slower.
• The complex shapes of numbers are not attributes that we perceive pre-attentively: simple shapes such as circles and squares are perceived pre-attentively, but the shapes of numbers are too elaborate.
Cont..
Visually encoding data for rapid perception
• In Information Visualization: Perception for Design, Colin Ware suggests that the pre-attentive attributes of
visual perception can be organized into four categories: color, form, spatial position, and motion.
Color must be used with a full awareness of context. We not only want data to be fully legible, but also to
appear the same when we wish it to appear the same and different when we wish it to appear different.
• Attributes of Form
• The most common application of orientation is in the form of italicized text, which is text that has
been reoriented from straight up and down to slightly slanted to the right.
• In dashboard design, the attribute of line length is most useful for encoding quantitative values as
bars in a bar graph. Line width, on the other hand, can be useful for highlighting purposes.
• The relative sizes of objects that appear on a dashboard can be used to visually rank their
importance.
• For instance, larger titles for sections of content, or larger tables, graphs, or icons, can be used to
declare the greater importance of the associated data.
• Attributes of Position
• The preattentive attribute 2‐D position is the primary means that we use to encode quantitative
data in graphs (for example, the position of data points in relation to a quantitative scale).
• This isn't arbitrary. Of all the preattentive attributes, differences in 2‐D position are the easiest and
most accurate to perceive.
• Attributes of motion
• Flicker was chosen as the means to help us locate the cursor because it is a powerful attention‐getter.
• Evolution has equipped us with a heightened sensitivity to something that suddenly appears within our field of
vision.
• This is especially true for dashboards that are constantly updated with real‐time data and are used to monitor
operations that require immediate responses.
• Encoding Quantitative Versus Categorical Data
Cont..
• Limits to Perceptual Distinctness-
• When designing dashboards, bear in mind that there is a limit to the number of distinct expressions of a single
preattentive attribute that we can quickly and easily distinguish.
• For example, when using varying intensities of the color gray to distinguish data sets in a line graph, you must
make sure that the color of each line is different enough from those closest in color to it to clearly stand out as
distinct.
• When you place enough perceptual distance between the color intensities of the separate lines to make them
sufficiently distinct, there's a practical limit of about five to the number of distinct expressions that are available
across the gray scale.
Cont..
• Using Vivid and Subtle Colors Appropriately
• Proximity
• Closure
• Similarity
• Continuity
• Enclosure
• Connection
The Principle of Proximity
It says that we perceive objects located near one another as belonging to the same group. In the first figure, based on their relative locations, we automatically see the dots as belonging to three separate groups. This is the simplest way to link data that you want to be seen together.
• If you understand the how and why, when you're faced with new challenges you'll be able to
determine whether or not the principles apply and how to adapt them to the new circumstances.
• If you've simply been told that something works in a specific situation, you'll be stuck when faced
with conditions that are even slightly different.
• Characteristics of a Well-Designed Dashboard –
• The fundamental challenge of dashboard design involves squeezing a great deal of useful and often
disparate information into a small amount of space, all the while preserving clarity.
• There are some other challenges too, such as selecting the right data in the first place.
• Limited to a single screen to keep all the data within eye span, dashboard real estate is extremely
valuable: you can't afford to waste an inch.
• For example, consider the cockpit of a commercial jet. Years of effort went into its design to ensure that, despite the many things pilots must monitor, they can see everything that's going on at a glance.
• When designing dashboards, you must include only the information that you absolutely need, you must
condense it in ways that don't decrease its meaning, and
• you must display it using visual display mechanisms that, even when quite small, can be easily read and
understood.
• Well‐designed dashboards deliver information that is:
The figure above shows a table and a graph in which the non-data ink is encoded in red.
• Formally,
• A large share of ink on a graphic should present data-information, the ink changing as the data change. Data-ink is the non-erasable core of a graphic, the non-redundant ink arranged in response to variation in the numbers represented. Then,
• Data-ink ratio = data-ink / total ink used to print the graphic
= proportion of a graphic's ink devoted to the non-redundant display of data-information
= 1.0 - proportion of a graphic that can be erased without loss of data-information.
Edward R. Tufte then applies it as a principle of design: "Maximize the data-ink ratio, within reason. Every bit of ink on a graphic requires a reason. And nearly always that reason should be that the ink presents new information."
This principle applies perfectly to the design of dashboards, with one simple revision, because dashboards are always displayed on computer screens:
"Across the entire dashboard, non-data pixels (any pixels that are not used to display data, excluding a blank background) should be reduced to a reasonable minimum."
• Much of visual dashboard design revolves around two fundamental goals:
1. Reduce the non‐data pixels.
2. Enhance the data pixels.
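A minimal sketch of both goals applied to a simple chart (assuming matplotlib; the data are illustrative):

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3, 4], [10, 14, 9, 17], linewidth=2)   # enhance the data pixels
ax.grid(False)                                        # reduce non-data pixels: no gridlines
ax.spines["top"].set_visible(False)                   # no full border around the data region
ax.spines["right"].set_visible(False)
plt.show()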
Eliminate all unnecessary non-data pixels
• The next few figures provide examples of non‐data pixels that often find their way onto dashboards but
can usually be eliminated without loss.
Grid lines in graphs. Grid lines in graphs are rarely useful. They
are one of the most prevalent forms of distracting non‐data
pixels found in dashboards.
Grid lines in tables, which divide the data into individual cells or divide either the rows or the columns, when white
space alone would do the job as well. Grid lines in tables can make otherwise simple displays difficult to look
at
Fill colors in the alternating rows of a table to delineate them when white space alone would work as well. Fill colors
should be used to delineate rows in a table only when this is necessary to help viewers' eyes track across the
rows.
Complete borders around the data region of a graph when one
horizontal and one vertical axis would sufficiently define the space.
A complete border around the data region of a graph should be
avoided when a single set of axes would adequately define the
space.
• For instance, when data is tightly packed, sometimes it is necessary to use lines or fill colors to delineate one
section from another, rather than white space alone.
• In these cases, rather than eliminating these useful non‐data pixels, you should simply mute them visually so
they don't attract attention.
• Focus should always be placed on the information itself, not on the design of the dashboard, which should be
almost invisible.
• The trick is to de‐emphasize these non‐data pixels by making them just visible enough to do their job, but no
more.
Few examples of non‐data pixels that are either always or occasionally useful. Examples are shown in two
ways: 1) a version that is too visually prominent, which illustrates what you should avoid; and 2) a version
that is just visible enough to do the job, which is the objective.
This dashboard gives navigational and data selection controls far more
dominance and space than they deserve.
Eliminate all unnecessary data pixels
• Elimination of unnecessary data pixels is achieved not only through the complete removal of less relevant data
• But also by condensing data through the use of summaries and exceptions, so that the level of detail that is
displayed doesn't exceed what's necessary.
• For most applications, it would be absurd to include detailed information such as transaction-level sales data on a dashboard; some level of summarization is needed, and it is often up to you to determine what that level is.
• You might choose to display a single quarter‐to‐date value, a value per region, or a value per month, just to
name a few possibilities.
• Consider the example below, where three time-series graphs displaying public transportation rider statistics contain three levels of detail: daily for the current month, monthly for the current year, and yearly for the last 10 years.
• The example above displays a summarizing technique.
• This technique involves multi-foci displays.
• When it is useful to display historical context for a measure, such as the last 12 months or the last 5
years, often information that is more distant from the present is less important than recent history.
• In such cases, there is no reason to display the full range of data at the same level of detail.
• For instance, you might want to display the current month as daily measures, the preceding 12 months
as monthly measures, and the preceding 4 years as annual measures.
Highlight the most important data pixels that remain
• All the information that finds its way onto a dashboard should be important, but not all data is created equal:
some data is more important than other data. The most important information can be divided into two
categories:
• Information that is always important
• Information that is only important at the moment
• Considering the entire collection of information that belongs on a dashboard, you should be able to prioritize it
according to what is usually of greatest interest to viewers.
• For instance, a dashboard that serves the needs of a corporation's executives might display several categories of
financial, sales, and personnel data.
• On the whole, however, the executives usually care about some key measures more than others.
• The other category of especially important information is that which is important only when it reveals
something out of the ordinary.
• A measure that has fallen far behind its target, an opportunity that has just arisen and won't last for long, or an
operational condition that demands immediate attention all fall into this category.
• These two categories of important information require different means of highlighting on a dashboard.
• The first category information that is always important can be emphasized using static means, but the
second category information that is important only at the moment requires a dynamic means of
emphasis.
• Except for content that demands attention, use less saturated colors such as those that are predominant in
nature (for example, the colors of the earth and sky).
• Use a barely discernable pale background color other than pure white to provide a more soothing, less starkly
contrasting surface on which the data can reside.
• Choose High Resolution for Clarity:
• The high density of information that typically appears on a dashboard requires that the graphical images be
displayed with exceptional visual clarity.
• Images with poor resolution are hard to read, which slows down the process of scanning the dashboard for
information (and is just plain annoying).
• Visual clarity does not require fancy shading or photo‐realism; simple high‐resolution images will do.
• Choose the Right Text: The final recommendation regarding dashboard aesthetics involves the use of text.
• Use the most legible font you can find.
• No need to set a mood or reinforce a theme by using an unusual font.
• Ornate text might be appropriate for a poster advertising the circus, but not for a dashboard.
• You want a font that can be read the fastest with the least amount of strain on the eyes.
• Find one that works and stick with it throughout the dashboard.
• You can use a different font for headings to help them stand out if you wish, but that's the practical
limit.
• Below figure illustrates a few of the good and bad choices that are available.