
Unit-2

IMAGES-VIDEOS

● An image consists of a rectangular array of dots called pixels. The size of the image is
specified as width x height, in pixels.
● The physical size of the image, in inches or centimeters, depends on the resolution of the
device on which the image is displayed. Resolution is usually measured in DPI (Dots
Per Inch).
● An image will appear smaller on a device with a higher resolution than on one with a
lower resolution. For color images, one needs enough bits per pixel to represent all the
colors in the image. The number of bits per pixel is called the depth of the image.

Image data types


➢ Images can be represented using different data types, such as monochrome and
colored images. A monochrome image uses a single color, whereas a colored image uses
multiple colors. Some important image data types are the following:

1-bit images-

➢ An image is a set of pixels; a pixel is a picture element in a digital image.
In 1-bit images, each pixel is stored as a single bit (0 or 1). A bit has only two
states: on or off, white or black, true or false.
➢ Therefore, such an image is also referred to as a binary image, since only two
states are available.
➢ A 1-bit image is also known as a 1-bit monochrome image because it contains one
color: black for the off state and white for the on state.
➢ A 1-bit image with resolution 640 x 480 needs a storage space of 640 x 480 bits
= (640 x 480) / 8 bytes = (640 x 480) / (8 x 1024) KB = 37.5 KB.
The clarity or quality of a 1-bit image is very low.
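The storage calculations used throughout this unit all follow the same pattern, which can be sketched as a small helper (a minimal illustration; the function name is just for this example):

```python
def storage_kb(width, height, bits_per_pixel):
    """Uncompressed storage for a width x height image, in kilobytes."""
    total_bits = width * height * bits_per_pixel
    return total_bits / 8 / 1024  # bits -> bytes -> kilobytes

print(storage_kb(640, 480, 1))   # 1-bit monochrome: 37.5 KB
print(storage_kb(640, 480, 8))   # 8-bit grayscale: 300.0 KB
print(storage_kb(640, 480, 24))  # 24-bit RGB: 900.0 KB
```

The same helper reproduces the figures quoted for the 8-bit and 24-bit image types below.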

8-bit Gray level images-

➢ Each pixel of an 8-bit gray-level image is represented by a single byte (8 bits).
Therefore each pixel of such an image can hold one of 2^8 = 256 values, between 0 and 255.
➢ Each pixel thus has a brightness value on a scale from black (0, no
brightness or intensity) to white (255, full brightness or intensity).
➢ For example, a dark pixel might have a value of 15 and a bright one might be 240.
➢ A grayscale digital image is an image in which the value of each pixel is a single
sample, which carries intensity information.
➢ Such images are composed exclusively of gray shades, varying from black at
the weakest intensity to white at the strongest.
➢ Grayscale images carry many shades of gray from black to white. Grayscale
images are also called monochromatic, denoting the presence of only one (mono)
color (chrome).
➢ An image is represented by a bitmap. A bitmap is a simple matrix of the tiny dots
(pixels) that form an image and are displayed on a computer screen or printed.
➢ An 8-bit image with resolution 640 x 480 needs a storage space of 640 x 480
bytes = (640 x 480) / 1024 KB = 300 KB. Therefore an 8-bit image needs 8 times
more storage space than a 1-bit image.

24-bit color images -

➢ In a 24-bit color image, each pixel is represented by three bytes, usually
representing RGB (Red, Green and Blue).
➢ True color is usually defined to mean 256 shades each of Red, Green and Blue,
for a total of 16,777,216 color variations.
➢ It provides a method of representing and storing graphical image information in an
RGB color space such that a large number of colors, shades and hues
can be displayed in an image, as in high-quality photographic images or
complex graphics.
➢ Many 24-bit color images are stored as 32-bit images, with an extra byte for each
pixel used to store an alpha value representing special-effect information.
➢ A 24-bit color image with resolution 640 x 480 needs a storage space of 640 x 480
x 3 bytes = (640 x 480 x 3) / 1024 KB = 900 KB without any compression. A 32-bit
color image with resolution 640 x 480 needs a storage space of 640 x 480 x 4
bytes = 1200 KB without any compression.
Disadvantages
○ Requires large storage space.
○ Many monitors can display only 256 different colors at any one
time. In this case it is wasteful to store more than 256
different colors in an image.

8-bit color images -

➢ 8-bit color graphics is a method of storing image information in a computer's
memory or in an image file, where one byte (8 bits) represents each pixel.
➢ The maximum number of colors that can be displayed at once is 256.
➢ 8-bit color graphics come in two forms. In the first form, the image stores not the
full 24-bit color value but an 8-bit index into a color map for each pixel.
➢ Therefore, such 8-bit image formats consist of two parts: a color map describing
what colors are present in the image, and the array of index values for each pixel
in the image.
➢ In most color maps, each color is chosen from a palette of 16,777,216
colors (24 bits: 8 red, 8 green, 8 blue).
➢ In the other form, the 8 bits use 3 bits for red, 3 bits for green and 2 bits for
blue.
➢ This second form is often called 8-bit truecolor, as it does not use a palette at all.
When a 24-bit full-color image is turned into an 8-bit image, some of the colors
have to be eliminated; this is known as the color quantization process.
➢ An 8-bit color image with resolution 640 x 480 needs a storage space of 640 x 480
bytes = (640 x 480) / 1024 KB = 300 KB without any compression.

Color lookup tables


➢ A color look-up table (LUT) is a mechanism used to transform a range of input colors
into another range of colors.
➢ A color look-up table converts the logical color numbers stored in each pixel of video
memory into physical colors, represented as RGB triplets, which can be displayed on a
computer monitor.
➢ Each pixel of the image stores only an index value, or logical color number. For example,
if a pixel stores the value 30, the meaning is to go to row 30 in the color look-up table
(LUT). The LUT is often called a palette.

Characteristics of a LUT are the following:

● The number of entries in the palette determines the maximum number of colors
which can appear on screen simultaneously.
● The width of each entry in the palette determines the number of colors which the
full palette can represent.

A common example is a palette of 256 colors: the number of entries is 256, so each entry is
addressed by an 8-bit pixel value. Each color can be chosen from a full palette of 16.7 million
colors: each entry is 24 bits wide, with 8 bits per channel, which gives 256 levels for each of
the red, green and blue components, for 256 x 256 x 256 = 16,777,216 colors in total.
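The index-to-color mechanism can be sketched as follows (a toy example with a 4-entry palette; real palettes typically hold up to 256 entries):

```python
# Hypothetical 4-entry palette of RGB triplets (a real LUT holds up to 256).
palette = [
    (0, 0, 0),        # index 0: black
    (255, 0, 0),      # index 1: red
    (0, 255, 0),      # index 2: green
    (255, 255, 255),  # index 3: white
]

# Each pixel in video memory stores only a logical color number (an index).
indexed_pixels = [0, 1, 1, 3, 2]

# The LUT resolves each index to the physical RGB triplet to display.
rgb_pixels = [palette[i] for i in indexed_pixels]
print(rgb_pixels)
```

Storing one index byte per pixel instead of three color bytes is exactly what makes the 8-bit indexed formats above so compact.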

Image file formats


● GIF- Graphics Interchange Format-
➔ The GIF format was created by CompuServe. It supports 256 colors.
➔ GIF is very popular on the Internet because of its compact size. It is
ideal for small icons used for navigational purposes and for simple diagrams.
➔ GIF creates a table of up to 256 colors from a pool of 16 million. If the image has
256 colors or fewer, GIF can render the image without any loss of quality.
➔ When the image contains more colors, GIF uses algorithms to match the colors of
the image with a palette of the optimum set of 256 colors available. Better
algorithms search the image to find the optimum set of 256 colors.
➔ Thus the GIF format is lossless only for images with 256 colors or less. For a
rich, true-color image, GIF may lose 99.998% of the colors. GIF files can be saved
with a maximum of 256 colors. This makes it a poor format for photographic
images.
➔ GIFs can be animated, which is another reason they became so successful. Most
animated banner ads are GIFs.
➔ GIFs allow single-bit transparency: when you are creating your image, you
can specify which color is to be transparent. This allows the background
colors of the web page to show through the image.
● JPEG- Joint Photographic Experts Group-
➔ The JPEG format was developed by the Joint Photographic Experts Group.
➔ JPEG files are bitmapped images that store information as 24-bit color.
➔ This is the format of choice for nearly all photographic images on the internet.
Digital cameras save images in JPEG format by default. It has become the main
graphics file format for the World Wide Web, and any browser can support it
without plug-ins.
➔ In order to make the file small, JPEG uses lossy compression. It works well on
photographs, artwork and similar materials, but not so well on lettering, simple
cartoons or line drawings.
➔ For photographs, JPEG images work much better than GIFs. Though JPEG can be
interlaced, this format lacks many of the other special abilities of GIFs, like
animation and transparency; it really is only for photos.
● PNG- Portable Network Graphics-
➔ PNG is the only lossless format that web browsers support. PNG supports 8-bit,
24-bit, 32-bit and 48-bit data types.
➔ One version of the format, PNG-8, is similar to the GIF format, but PNG is
superior to GIF.
➔ It produces smaller files with more options for colors.
➔ It also supports partial transparency. PNG-24 is another flavor of PNG, with
24-bit color support, allowing ranges of color akin to a high-color JPEG.
➔ PNG-24 is in no way a replacement format for JPEG, because it is a lossless
compression format. This means that the file size can be rather big compared to
a comparable JPEG. PNG also supports up to 48 bits of color information.
● TIFF- Tagged Image File Format-
➔ The TIFF format was developed by the Aldus Corporation in the 1980s and was
later supported by Microsoft.
➔ TIFF is a widely used bitmapped file format. It is supported by many
image editing applications, software used by scanners, and photo retouching
programs.
➔ TIFF can store many different types of image: 1-bit images, grayscale
images, 8-bit color images, 24-bit RGB images, etc.
➔ TIFF files originally used lossless compression. Today TIFF files can also use
lossy compression according to the requirement. Therefore, it is a very flexible
format. This file format is suitable when the output is printed.
➔ Multi-page documents can be stored as a single TIFF file, which is why this file
format is so popular. The TIFF format is now used and controlled by Adobe.
● BMP- Bitmap-
➔ The bitmap file format (BMP) is a very basic format supported by most Windows
applications.
➔ BMP can store many different types of image: 1-bit images, grayscale images,
8-bit color images, 24-bit RGB images, etc. BMP files are uncompressed and
therefore not suitable for the internet.
➔ BMP files can, however, be compressed using lossless data compression algorithms.
● EPS- Encapsulated Postscript-
➔ The EPS format is a vector-based graphics format. EPS is popular for saving image
files because it can be imported into nearly any kind of application.
➔ This file format is suitable for printed documents. Its main disadvantage is that
it requires more storage compared to other formats.
● PDF- Portable Document Format-
➔ The PDF format is vector graphics with embedded pixel graphics and many
compression options.
➔ It is used when your document is ready to be shared with others or for publication.
➔ It is the only format that is platform independent. If you have Adobe Acrobat, you
can print from any document to a PDF file. From Illustrator, you can save as .PDF.
● EXIF- Exchangeable Image File-
➔ Exif is an image format for digital cameras.
➔ A variety of tags are available to facilitate higher-quality printing, since
information about the camera and picture-taking conditions can be stored and
used by printers for possible color correction algorithms.
➔ It also includes a specification of the file format for audio that accompanies
digital images.
● WMF- Windows MetaFile-
➔ WMF is the vector file format for the MS-Windows operating environment.
➔ It consists of a collection of graphics device interface function calls to the
MS-Windows graphics drawing library.
➔ Metafiles are both small and flexible; however, these images can be displayed
properly only by their proprietary software.
● PICT-
➔ PICT images are useful in Macintosh software development, but you should avoid
them in desktop publishing.
➔ Avoid using the PICT format in electronic publishing; PICT images are prone to
corruption.
● Photoshop-

This is the native Photoshop file format created by Adobe. You can import this format
directly into most desktop publishing applications.

Color Models in Computer Graphics


● A color model is a 3D color coordinate system used to produce the full range of
colors from a set of primary colors.
● There are millions of colors used in computer graphics.
● Displayed color is produced by light.
● A color model is a hierarchical system in which we can create every color by
using the RGB (Red, Green, Blue) and CMYK (Cyan, Magenta, Yellow, Black)
models.
● We can use different colors for various purposes.
● The total number of colors displayed by the monitor depends on the storage
capacity of the video controller card.

Types of Color Model


The basic color model is divided into two parts-
● Additive Color Model(RGB COLOR MODEL):
➔ It is also known as the "RGB model." RGB stands for Red, Green, Blue.
➔ The additive color model uses a mixture of light to display colors. The
perceived color depends on the transmission of light. It is used in digital
media.

➔ The RGB color model is an additive color model in which red, green and blue
light are added together in various ways to reproduce a broad array of colors.
➔ The name of the model comes from the initials of the three additive primary
colors: red, green, and blue.
➔ The main purpose of the RGB color model is the sensing, representation, and
display of images in electronic systems, such as televisions and computers,
though it has also been used in conventional photography.
➔ Before the electronic age, the RGB color model already had a solid theory
behind it, based on human perception of colors.
➔ For example: computer monitors, televisions, etc.

● Subtractive Color Model(CMYK COLOR MODEL):


➔ It is also known as the "CMYK model." CMYK stands for Cyan,
Magenta, Yellow, and Black.
➔ The Subtractive model uses a reflection of light to display the colors. The
perceived color depends on the reflection of light.
➔ CMYK refers to the four ink plates used in some color printing: cyan, magenta,
yellow, and key (black).
➔ The CMYK model works by partially or entirely masking colors on a lighter,
usually white, background.
➔ The ink reduces the light that would otherwise be reflected. Such a model is
called subtractive because inks “subtract” the colors red, green, and blue from
white light.
➔ White light minus red leaves cyan, white light minus green leaves magenta, and
white light minus blue leaves yellow.

➔ In the CMYK model, white is the natural color of the background, while black
results from a full combination of colored inks.
➔ To save cost on ink, and to produce deeper black tones, unsaturated and dark
colors are produced by using black ink instead of the combination of cyan,
magenta, and yellow.
➔ The CMYK model uses printing inks.
➔ For Example- Paint, Pigments, and color filter etc.

Advantages:
1. Easy to implement.
2. It uses a color space suited to applications.
3. No transformation is required for data display.

Disadvantages:
1. Color values cannot be transferred directly from one device to another.
2. It is complex to determine a particular color.

RGB to CMY conversion


The conversion from RGB to CMY is done using the following method.

Consider a color image, i.e., three separate arrays for RED, GREEN and BLUE. To convert
it into CMY, subtract each value from the maximum number of levels - 1 (255 for 8-bit
channels). Each matrix is subtracted, and its respective CMY matrix is filled with the result.
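For a single pixel with 8-bit channels, the subtraction looks like this (a minimal sketch; the function name is just for illustration):

```python
def rgb_to_cmy(r, g, b, levels=256):
    """Convert one RGB pixel to CMY by subtracting each channel from levels - 1."""
    m = levels - 1  # maximum intensity (255 for 8-bit channels)
    return (m - r, m - g, m - b)

# Pure red has no cyan ink, full magenta and full yellow.
print(rgb_to_cmy(255, 0, 0))  # (0, 255, 255)
# White light subtracts to no ink at all.
print(rgb_to_cmy(255, 255, 255))  # (0, 0, 0)
```

Applied element-wise to the full R, G and B arrays, the same subtraction fills the C, M and Y matrices.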
Color Look-Up Table
● The color look-up table is a technique or process to convert a range of input
colors into another range of colors. It is also called a "CLUT."
● The color look-up table resides in the graphics card. The graphics card is also
called a "display adapter."
● The color look-up table provides us with various colors that can be used to modify
the color of objects.
● Either we can use the colors available in the palette, or we can create the colors of
our choice in the color window.
● In image processing, the look-up table is used to change the input into the more
required output format. The color look-up table is used to store and index the
color values of an image.

Look-Up File: The look-up file is a two-dimensional table that is used to contain the
data. The look-up data is stored in a disk file.

Color Palettes: Color palettes are defined as mathematical tables used to determine
the color of each pixel that is displayed on the screen.
In the Macintosh operating system, this is known as the "color look-up table."
In the Windows operating system, it is known as the "color palette."
The image stores a set of bytes (indices) instead of its full color values.

Advantages:
1. Easy to modify.
2. Space Efficient.

Disadvantages:
1. It does not maintain a history of changes.
2. The index references must be determined and maintained.

Direct Coding
● “Direct Coding is a technique or process which is used to provide a certain amount
of memory storage space for a pixel to encode the pixel with the color.”

For example:

● Assign one bit to each primary color, giving 3 bits for each pixel.
● This 3-bit representation allows two intensity levels for each primary,
0 (off) or 1 (on), so each pixel can take one of eight colors,
corresponding to the corners of the RGB color cube.

Bit 1: Red   Bit 2: Green   Bit 3: Blue   Color Name

    0             0             0         Black
    0             0             1         Blue
    0             1             0         Green
    0             1             1         Cyan
    1             0             0         Red
    1             0             1         Magenta
    1             1             0         Yellow
    1             1             1         White
● In industry, mostly 3 bytes (24 bits) per pixel are used, with 1 byte for each
primary color. Each primary color can then have 256 different intensity levels,
corresponding to binary values from 00000000 to 11111111.
● The color of a pixel can thus be one of 256 x 256 x 256 ≈ 16.7 million colors. A
black-and-white (binary) image needs just 1 bit per pixel.
● The bit value 0 represents black, and 1 represents white. The direct coding
technique is simple, and it supports a variety of applications.

Differences between RGB and CMYK color schemes:

RGB Color Scheme                                   CMYK Color Scheme

Used for digital works.                            Used for print works.

Primary colors: Red, Green, Blue.                  Primary colors: Cyan, Magenta, Yellow, Black.

Additive type mixing.                              Subtractive type mixing.

Colors of images are more vibrant.                 Colors are less vibrant.

Wider range of colors than CMYK.                   Lesser range of colors than RGB.

File formats: JPEG, PNG, GIF, etc.                 File formats: PDF, EPS, etc.

Basically used for online logos, online ads,       Basically used for business cards, stationery,
digital graphics, photographs for websites,        stickers, posters, brochures, etc.
social media, or apps, etc.

Video signal
A video signal is a signal produced by the video adapter that allows a
display device, such as a computer monitor, to display a picture.

What are the different types of video signals?


There are three types of video signals as follows:
● Composite Video
● Component Video
● S-Video
Composite Video
● Composite video was the first analog color video format; it uses one channel and a single
cable (the audio tracks are transmitted in separate channels and cables). All old analog TVs
and many digital TVs have composite video inputs (LG example below).
● The original TV standard combined luminance (Y) and both color signals (U) and (V) into
one channel, which uses one cable and is known as "composite video."
● Composite video was created when color was added to black & white TV in 1954. The two
color signals (U and V) were multiplexed with the original monochrome signal (Y) and
transmitted in the same TV channel.
● The yellow RCA jack is the composite video socket found on video devices.
● In composite video, three source signals are combined with sync pulses to form a
composite video signal. The three source signals are referred to as YUV, in which Y
represents the brightness of the picture and also includes the synchronizing pulses.
● The color information is carried by U and V. Two orthogonal phases of a color carrier
signal are first mixed with them to form a signal called chrominance. The Y signal and
the UV signal are then combined together.
● The signals are compressed and then channeled through a single wire.
● The chrominance and luminance components can be separated at the receiver end, and
the two color components can then be further recovered.
● When connecting to TVs or VCRs, composite video uses only one wire, and the video
color signals are mixed, not sent separately.
● The audio and sync signals are additions to this one signal. Since color and intensity are
wrapped into the same signal, some interference between the luminance and chrominance
signals is inevitable (unavoidable).

S-Video
● S-Video was one of a number of enhancements for bringing the signal from the video
cassette player to TVs; it separates the video signal (luminance) and the color signal
(chrominance).
● S-video is a technology for transmitting video signals over a cable by dividing the video
information into two separate signals: one for color (chrominance), and the other for
brightness (luminance).
● When sent to a television, this produces sharper images than Composite Video, where the
video information is transmitted as a single signal over one wire. This is because
televisions are designed to display separate Luminance (Y) and Chrominance (C) signals.
● Computer monitors are designed for RGB (Red, Green, Blue) signals.
Most digital video devices, such as digital cameras and game players, produce video in
RGB format. The images are clearer when displayed on a computer monitor. When
displayed on a standard television, however, they look better in S-Video format than in
Composite Video format.
● To use S-Video, the device sending the signals must support S-Video output and the
device receiving the signals must have an S-Video input jack. Then you need an S-Video
cable to connect the two devices.
● S-Video cable doesn't always come standard with a TV, and usually must be purchased
separately.
● S-Video cables carry four or more wires wrapped together in an insulated sleeve, with
S-Video connectors at either end. It is only for video and requires separate audio cables,
but it provides a slightly better picture than a composite video cable.
● Like composite video, S-video connectors are widely used on VCRs, DVD players and
receivers. The audio for both composite video and S-video uses common left/right stereo
connections.
● As a result, there is less crosstalk between the color information and the crucial
gray-scale information.
● S-Video cables are used for computer-to-TV output for business or home use.

Component Video
● Component video is a video signal that has been split into two or more components. In
popular use, it refers to a type of analog video information that is transmitted or stored as
three separate signals.
● Component analog video signals do not use R, G, and B components but rather a
colorless component, termed luma, which provides brightness information (as in
black-and-white video).
● This combines with one or more color-carrying components, termed chroma, that give
only color information.
● In component video, the luminance (Y) and two color difference signals (U and V or I
and Q) are separated into three separate analog signals that can be transmitted over three
separate wires.
● Component video is used in professional video production and provides the best quality
and the most accurate reproduction of colors.
● Component Video gives the best color reproduction since there is no crosstalk between
the three channels. Component video requires more bandwidth and good synchronization
of the three components.
● Component video cables come in three-wire sets shown below.

Analog video:

● Analog video is a video signal transferred by an analog signal. When combined into
one channel, it is called composite video, as is the case, among others, with NTSC,
PAL and SECAM.
● Analog video may be carried in separate channels, as in two-channel S-Video (YC) and
multi-channel component video formats.
● Analog video is used in both consumer and professional television production applications.
● However, digital video signal formats with higher quality have been adopted, including
serial digital interface (SDI), FireWire (IEEE 1394), Digital Visual Interface (DVI) and
High-Definition Multimedia Interface (HDMI).
● Most TV is still sent and received as an analog signal. Once the electrical signal is
received, we may assume that brightness is at least a monotonic function of voltage, if
not necessarily linear, because of gamma correction.

Video Scanning Methods:-

● An analog signal f(t) samples a time-varying image. So-called progressive scanning traces
through a complete picture (a frame) row-wise for each time interval. A high-resolution
computer monitor typically uses a time interval of 1/72 second.

Interlaced raster scan


● In TV and in some monitors and multimedia standards, another system, interlaced
scanning, is used. Here, the odd-numbered lines are traced first, then the even-numbered
lines. This results in "odd" and "even" fields — two fields make up one frame.
● In fact, the odd lines (starting from 1) end up at the middle of a line at the end of the odd
field, and the even scan starts at a half-way point. The following figure shows the scheme
used. First the solid (odd) lines are traced — P to Q, then R to S, and so on, ending at T —
then the even field starts at U and ends at V. The scan lines are not horizontal because a
small voltage is applied, moving the electron beam down over time.

● Interlacing was invented because, when standards were being defined, it was difficult to
transmit the amount of information in a full frame quickly enough to avoid flicker. The
doubled number of fields presented to the eye reduces perceived flicker.
● Because of interlacing, the odd and even lines are displaced in time from each other. This
is generally not noticeable except when fast action is taking place onscreen, when blurring
may occur. For example, in the video in the following figure, the moving helicopter is
blurred more than the still background.
● Since it is sometimes necessary to change the frame rate, resize, or even produce stills
from an interlaced source video, various schemes are used to de-interlace it. The simplest
de-interlacing method consists of discarding one field and duplicating the scan lines of the
other field, which results in the information in one field being lost completely. Other,
more complicated methods retain information from both fields.
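The discard-and-duplicate scheme can be sketched in a few lines (a minimal illustration operating on a frame represented as a list of scan lines; the function name is hypothetical):

```python
# Simplest de-interlacing: keep one field (here the odd field, lines
# 0, 2, 4, ...) and duplicate each of its scan lines in place of the
# discarded field.
def deinterlace_by_duplication(frame):
    """frame: a list of scan lines; returns a frame of the same height."""
    out = []
    for i in range(0, len(frame), 2):
        out.append(frame[i])  # scan line from the kept field
        out.append(frame[i])  # duplicated where the other field was
    return out

frame = ["odd0", "even0", "odd1", "even1"]
print(deinterlace_by_duplication(frame))  # ['odd0', 'odd0', 'odd1', 'odd1']
```

Note that all information from the discarded (even) field is lost, exactly as described above; better methods interpolate between both fields.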

● CRT displays are built like fluorescent lights and must flash 50 to 70 times per second to
appear smooth. In Europe, this fact is conveniently tied to the 50 Hz electrical system, and
video is digitized at 25 frames per second (fps); in North America, the 60 Hz electrical
system dictates 30 fps.
● The jump from Q to R and so on is called the horizontal retrace, during which the electron
beam in the CRT is blanked. The jump from T to U or V to P is called the vertical retrace.

Analog Video Standards: NTSC, PAL, SECAM

NTSC Video

● NTSC, named for the National Television System Committee, is the analog television
system that is used in most of North America, parts of South America (except Brazil,
Argentina, Uruguay, and French Guiana), Myanmar, South Korea, Taiwan, Japan, the
Philippines, and some Pacific island nations and territories.
● The first NTSC standard was developed in 1941 and had no provision for color television.
In 1953 a second, modified version of the NTSC standard was adopted, which allowed
color television broadcasting compatible with the existing stock of black-and-white
receivers.
● NTSC was the first widely adopted broadcast color system and remained dominant where
it had been adopted until the first decade of the 21st century, when it was replaced with
digital ATSC.
● Digital broadcasting permits higher-resolution television, but digital standard-definition
television in these countries continues to use the frame rate and number of lines of
resolution established by the analog NTSC standard.
● Systems using the NTSC frame rate and resolution (such as DVDs) are still referred to
informally as "NTSC". NTSC baseband video signals are also still often used in video
playback (typically of recordings from existing libraries using existing equipment) and in
CCTV and surveillance video systems.

Video raster, including retrace and sync data


Samples per line for various analog video formats

● Different video formats provide different numbers of samples per line, as listed in the
above table. Laser disks have about the same resolution as Hi-8. (In comparison, mini-DV
1/4-inch tapes for digital video are 480 lines by 720 samples per line.)

PAL Video

● PAL (Phase Alternating Line) is a TV standard originally invented by German scientists.
It uses 625 scan lines per frame, at 25 frames per second (or 40 msec/frame), with a 4:3
aspect ratio and interlaced fields.
● Its broadcast TV signals are also used in composite video. This important standard is
widely used in Western Europe, China, India and many other parts of the world.
● PAL uses the YUV color model with an 8 MHz channel, allocating a bandwidth of
5.5 MHz to Y and 1.8 MHz each to U and V.
● The color subcarrier frequency is fsc ≈ 4.43 MHz. To improve picture quality, chroma
signals have alternate signs (e.g., +U and -U) in successive scan lines; hence the name
"Phase Alternating Line."

SECAM Video
● SECAM, which was invented by the French, is the third major broadcast TV standard.
SECAM stands for Système Électronique Couleur Avec Mémoire.
● SECAM also uses 625 scan lines per frame, at 25 frames per second, with a 4:3 aspect
ratio and interlaced fields.
● The original design called for a higher number of scan lines (over 800), but the final
version settled for 625.
● SECAM and PAL are similar, differing slightly in their color coding scheme. In SECAM,
the U and V signals are modulated using separate color subcarriers at 4.25 MHz and
4.41 MHz, respectively.
● They are sent in alternate lines - that is, only one of the U or V signals will be sent on
each scan line.
Digital video

● Digital video comprises a series of orthogonal bitmap digital images displayed in rapid

succession at a constant rate. In the context of video these images are called frames. We

measure the rate at which frames are displayed in frames per second (FPS).

● Since every frame is an orthogonal bitmap digital image it comprises a raster of pixels. If it

has a width of W pixels and a height of Hpixels we say that the frame size is WxH.

● Pixels have only one property, their color. The color of a pixel is represented by a fixed

number of bits. The more bits the more subtle variations of colors can be reproduced. This is

called the color depth (CD) of the video.

An example video can have a duration (T) of 1 hour (3600sec), a frame size of 640 x 480 (W x H) at

a color depth of 24bits and a frame rate of 25fps. This example video has the following properties:

● pixels per frame = 640 * 480 = 307,200
● bits per frame = 307,200 * 24 = 7,372,800 ≈ 7.37 Mbits
● bit rate (BR) = 7,372,800 * 25 = 184,320,000 bits/sec ≈ 184.32 Mbits/sec
● video size (VS) = 184.32 Mbits/sec * 3600 sec = 663,552 Mbits = 82,944 Mbytes ≈ 82.9 Gbytes
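The arithmetic above can be sketched as a short calculation. The variable names (W, H, CD, FPS, T) follow the notation introduced in the notes:

```python
# Uncompressed video storage calculation for the example in the notes.

W, H = 640, 480          # frame size in pixels
CD = 24                  # color depth in bits per pixel
FPS = 25                 # frame rate
T = 3600                 # duration in seconds (1 hour)

pixels_per_frame = W * H                 # 307,200
bits_per_frame = pixels_per_frame * CD   # 7,372,800
bit_rate = bits_per_frame * FPS          # 184,320,000 bits/sec
video_size_bits = bit_rate * T           # total bits for the whole clip

print(f"bit rate   = {bit_rate / 1e6:.2f} Mbit/s")
print(f"video size = {video_size_bits / 8 / 1e9:.1f} GB (uncompressed)")
```

The result, roughly 83 GB for one uncompressed hour, is why the compression techniques discussed below (chroma subsampling, MPEG coding) matter in practice.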

The advantages of digital representation for video are many. It permits

● Storing video on digital devices or in memory, ready to be processed (noise removal, cut and paste, and so on) and integrated into various multimedia applications
● Direct access, which makes nonlinear video editing simple
● Repeated recording without degradation of image quality
● Ease of encryption and better tolerance to channel noise
Digital Video Standards

Chroma Subsampling

● Since humans perceive color with much less spatial resolution than brightness, it makes sense to decimate the chrominance signal.
● Chroma subsampling is a type of compression that reduces the color information in a signal in favor of luminance data. This reduces bandwidth without significantly affecting picture quality.
● A video signal is split into two different aspects: luminance information and color information.
● Luminance, or luma for short, defines most of the picture, since contrast is what forms the shapes you see on the screen. For example, a black-and-white image does not look less detailed than a color picture.
● Color information, chrominance, or simply chroma, is important as well, but has less visual impact.
● What chroma subsampling does is reduce the amount of color information in the signal to allow more luminance data instead.
● This allows you to maintain picture clarity while reducing the file size by up to 50%.
● In the YUV format, luma is only one third of the signal, so reducing the amount of chroma data helps a lot. Given bandwidth limitations from internet speeds and HDMI, this makes for much more efficient use of current systems.

4:4:4 VS 4:2:2 VS 4:2:0

● The first number (in this case 4) refers to the size of the sample. The two following numbers both refer to chroma. They are both relative to the first number and define the horizontal and vertical sampling, respectively.
● A signal with chroma 4:4:4 has no compression (so it is not subsampled) and transports both luminance and color data entirely.
● In a four-by-two array of pixels, 4:2:2 has half the chroma of 4:4:4, and 4:2:0 has a quarter of the color information available.
● The 4:2:2 signal has half the sampling rate horizontally but maintains full sampling vertically.
● 4:2:0, on the other hand, samples colors from only half the pixels on the first row and ignores the second row of the sample completely.

Scheme 4:2:0, along with others, is commonly used in JPEG and MPEG.
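As a rough illustration of the schemes above, the following sketch (not from the notes; the function names `samples_per_block` and `subsample_420` are hypothetical) counts the samples taken from a four-wide by two-high pixel block under each J:a:b scheme, and shows a simple 2x2 averaging filter of the kind used for 4:2:0-style chroma decimation:

```python
# Sample counting for J:a:b chroma subsampling schemes, plus a 2x2
# averaging filter that decimates a chroma plane as in 4:2:0.

def samples_per_block(scheme):
    """Return (luma, chroma) sample counts for a 4-wide by 2-high pixel block.

    scheme is a (J, a, b) tuple: J pixels wide, a chroma samples on the
    first row, b chroma samples on the second row."""
    j, a, b = scheme
    luma = j * 2              # Y is kept at full resolution on both rows
    chroma = 2 * (a + b)      # two chroma planes (U and V)
    return luma, chroma

def subsample_420(plane):
    """Decimate a chroma plane by averaging each 2x2 block (4:2:0 style)."""
    return [[(plane[r][c] + plane[r][c + 1] +
              plane[r + 1][c] + plane[r + 1][c + 1]) // 4
             for c in range(0, len(plane[0]), 2)]
            for r in range(0, len(plane), 2)]

for scheme in [(4, 4, 4), (4, 2, 2), (4, 2, 0)]:
    y, c = samples_per_block(scheme)
    print(scheme, "->", y + c, "samples per block,",
          f"{(y + c) / 24:.0%} of 4:4:4")
```

Under this counting, 4:2:0 keeps 12 of the 24 samples that 4:4:4 would carry, which is where the "up to 50%" file-size reduction quoted earlier comes from.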

High Definition TV (HDTV)

● HDTV, in full high-definition television, is a digital broadcasting standard that offers picture and audio superior to that of traditional standard-definition television (SDTV).
● HDTV, broadcast by cable or satellite or over the ultrahigh frequency (UHF) portion of public airwaves at a bandwidth of 6 megahertz (MHz), offers video resolutions as high as 1,920 by 1,080 pixels (1,920 columns by 1,080 rows), many times greater than that of SDTV.
● The lowest high-resolution standard is 720p, or 720 progressive scan, with a resolution of 1,280 by 720 in which all the rows are refreshed together in each display cycle (typically 50 or 60 Hz, depending on country, though televisions with faster display cycles have been introduced).
● The next higher resolution is 1080i, or 1080 interlaced scan, with a resolution of 1,920 by 1,080 in which only alternate rows are refreshed in each cycle.
● While 720p gives slightly better images than 1080i for scenes with a great deal of motion, 1080i gives slightly greater detail, resulting in crisper static images. Finally, 1080p combines the progressive scan of 720p with the greater pixel count of 1080i.
● HDTV's picture mimics the "wide-screen" shape of motion pictures, with a rectangular aspect ratio of 16:9; SDTV typically appears in the nearly square 4:3 aspect ratio.
● Because it is digital, HD allows multicasting, whereby a single television station may broadcast different programs on several channels simultaneously.
● HDTV is capable of broadcasting audio in 5.1-channel "surround sound," which is more nuanced than conventional stereo.


The services provided will include

● Standard Definition TV (SDTV): the current NTSC-quality TV or higher
● Enhanced Definition TV (EDTV): 480 active lines or higher
● High Definition TV (HDTV): 720 active lines or higher. So far, the popular choices are 720p (720 lines, progressive, 30 fps) and 1080i (1,080 lines, interlaced, 30 fps, or 60 fields per second). The latter provides slightly better picture quality but requires much higher bandwidth.
