Physics For Everyday Life
UNIT-1
MECHANICAL OBJECTS
1.1.BOUNCING BALL
The motion of the ball can be divided into stages, each with its own direction
and velocity. As the ball bounces, it experiences damping, which reduces its
amplitude and eventually brings it to a stop due to friction forces like air resistance.
The ball's motion is not simple harmonic because its acceleration is not proportional
to its displacement from an equilibrium position.
The ball's acceleration is always downwards due to gravity. These stages can
be shown visually using three graphs: displacement, velocity, and acceleration vs.
time.
The displacement of the ball at 50 seconds can be found using the area
under the velocity–time graph, which is equal to the displacement. The area of a
triangle is found using the formula: Area = (1/2) × base × height. The velocity of the ball before
it hits the ground from a height of three metres can be found by using the
conservation of energy. We equate the potential energy and the kinetic energy, and
rearrange with respect to velocity: v = √2gh, where g is the acceleration due to
gravity and h is the height of the ball.
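The energy-balance step above can be sketched in Python; this is a minimal illustration, assuming g = 9.81 m/s² and the 3 m drop height used in the text:

```python
import math

def impact_speed(h, g=9.81):
    """Speed just before impact from energy conservation:
    m*g*h = (1/2)*m*v**2  =>  v = sqrt(2*g*h).  Mass cancels."""
    return math.sqrt(2 * g * h)

# Ball dropped from 3 m, as in the text:
v = impact_speed(3.0)
print(round(v, 2))  # 7.67 m/s
```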
Geometric sequence
A geometric sequence is a sequence in which each term is related to the previous
term by a common ratio, denoted by r. The nth term of a geometric sequence is given
by an = ar^(n-1), where a is the first term of the sequence. The sum of n terms of a
geometric sequence is given by the formula Sn = a(1-r^n)/(1-r). For an infinite
geometric sequence with a common ratio between 0 and 1, the sum of an infinite
number of terms can be calculated using the formula S∞ = a/(1-r). This is because
as n approaches infinity, r^n approaches zero, making the expression (1-r^n)/(1-r)
approach 1/(1-r).
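These three formulas can be checked numerically; the values of a and r below are arbitrary examples:

```python
def nth_term(a, r, n):
    """n-th term of a geometric sequence: a_n = a * r**(n-1)."""
    return a * r ** (n - 1)

def partial_sum(a, r, n):
    """Sum of the first n terms: S_n = a * (1 - r**n) / (1 - r)."""
    return a * (1 - r ** n) / (1 - r)

def infinite_sum(a, r):
    """Sum of infinitely many terms, valid only for |r| < 1: S_inf = a / (1 - r)."""
    assert abs(r) < 1, "series converges only for |r| < 1"
    return a / (1 - r)

a, r = 3.0, 0.5
print(partial_sum(a, r, 10))  # 5.994140625 — already close to the limit
print(infinite_sum(a, r))     # 6.0
```

As the text notes, the partial sums approach a/(1-r) because r**n shrinks toward zero.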
However, this distance includes both the upward and downward travel of the
ball. To find the total distance traveled, we need to multiply this value by 2, giving
us:
For Solution B, we are asked to find the distance of travel if the ball bounces
infinitely, without losing any energy. In this case, we can use the formula for the
sum of an infinite geometric sequence:
S∞ = a/(1-k), where the rebound ratio k plays the role of the common ratio r
Again, this distance includes both the upward and downward travel of the ball,
so the total distance traveled would be:
Therefore, the total distance traveled by the ball in this ideal scenario would
be 19.354 metres.
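As a sketch of how such a total can be computed: the first fall covers the drop height once, and every later bounce adds an up-and-down pair. The drop height h0 and rebound ratio k below are illustrative values, not the ones behind the 19.354 m figure:

```python
def total_bounce_distance(h0, k, n_bounces=None):
    """Total path length of a ball dropped from h0 that rebounds to k times
    its previous height on each bounce (0 < k < 1).

    First fall contributes h0; bounce i adds an up-and-down pair 2*h0*k**i.
    Idealised infinite bouncing: h0 + 2*h0*k/(1 - k), via the geometric series."""
    if n_bounces is None:
        return h0 + 2 * h0 * k / (1 - k)
    return h0 + sum(2 * h0 * k ** i for i in range(1, n_bounces + 1))

print(total_bounce_distance(3.0, 0.5))        # 9.0 m (infinite-series limit)
print(total_bounce_distance(3.0, 0.5, 5))     # partial total after 5 bounces
```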
Suppose I'm dropping a rubber ball on the floor and it bounces back up. I'm
trying to understand this by considering the forces alone, not momentum,
collisions, or energy.
Eventually, the ball comes to rest as its energy dissipates. When that
happens, the normal reaction of the floor on the ball (which ultimately arises from
electrostatic forces between molecules) is equal to the weight of the ball.
1.2. SPRING SCALE
The spring scale is designed to measure the overall weight of an object. A
common misconception is that the spring scale measures mass; it does not.
These devices work based on Hooke's Law, which states that the force or
weight that extends a spring is directly related to the distance that the spring is
extended from its resting position. The spring scale converts this extension to
measuring weight using an analog or digital gauge attached to the device.
Image of Handheld Spring Scale
Not wanting to be surprised by the price when you check out, you search for
a produce scale. You find one hanging from the ceiling not too far away, and drop
your broccoli in the basket.
The scale bounces around for a moment and the dial above the basket finally
settles on a weight of just under one pound. Congratulations! Not only have you
found a tasty and nutritious side dish, you have also just used a spring scale!
Notice that we said a spring scale measures the weight of an object, not its
mass. In everyday speech, we use mass and weight interchangeably (and we'll
discuss why a little bit later).
In science however, these terms have very different meanings. It is important
to recognize these differences in order to properly understand how a spring scale
works. Now, let's look at the difference between mass and weight.
Function:
What is a spring scale used for?
Most experiences with the spring scale in daily life determine the weight of
an item or items - the device functions when an object is attached to the hook at the
bottom of the device. The weight of the object causes the spring to be extended. The
distance of the extension is directly translated into the weight of the product and is
presented on an easy-to-read dial or screen. The devices function straightforwardly
and are used in a multitude of ways. Other common uses of the spring scale are
shown below.
Working :
The spring scale is a device used to measure the weight of an object. By
placing an item on the hook or pan at the bottom of a spring scale, force is applied
to a spring inside the device. As this spring stretches or moves, the distance that it
travels is translated into a measurement of weight shown on a dial or screen. The
spring scale operates on a concept known as Hooke's Law. This law of
physics implies that the extension of a spring is directly proportional to the force
applied to it. In scientific applications, this force is measured in newtons, as that is
the standard of measurement in working with force applications.
The spring scale was invented in or around 1770 to replace the balance scale.
The balance scale required counterweights to produce an accurate measurement, and
the industrial revolution required something more precise. British balance-maker
Richard Salter is credited with the invention of the revolutionary device. His
invention has been used commonly in domestic and commercial environments, and
it continues to be widely used today.
The spring is a tool many of us use daily, and its inertia is frequently
neglected by treating it as massless. It is a familiar observation that a spring
stretches when pulled, compresses when pushed, and returns to its equilibrium
position when released. This tells us that a spring exerts an equal and opposite
force on a body that compresses or stretches it.
F = –k(x – x0)
Where,
the spring force is F,
the equilibrium position is x0,
the displacement of the spring from its equilibrium position is x,
the spring constant is k.
The negative sign tells that the visualized spring force is a restoring force and acts
in the opposite direction.
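Hooke's law as stated above can be illustrated with a short sketch; the spring constant and displacement below are made-up values:

```python
def spring_force(x, x0=0.0, k=200.0):
    """Restoring force of a spring (Hooke's law): F = -k * (x - x0).
    The minus sign makes the force oppose the displacement."""
    return -k * (x - x0)

# Hypothetical spring scale: k = 200 N/m, stretched 5 cm past equilibrium.
F = spring_force(0.05)
print(F)  # -10.0 N, i.e. 10 N pulling back toward equilibrium
```

This is also why a spring scale reads weight: the extension settles where the restoring force balances the hanging object's weight.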
1.3.ROCKET
• Rocket Propulsion
Rocket propulsion is defined as
The force that is used by the rocket to take off from the ground and into the
atmosphere.
The principle on which rocket propulsion works is based on Newton’s third law of
motion. Here, exhaust gases are forcibly ejected from the nozzle, producing an
equal and opposite reaction that propels the rocket.
• Rocket Propulsion Diagram
The diagram below shows simplified liquid-fuel and solid-fuel rockets. Their
main components include:
• An oxidizer
• Pumps to carry the fuel and the oxidizer
Acceleration of Rocket
The acceleration of the rocket is given as:
a = (ve/m)(Δm/Δt) − g
Where,
• a is the acceleration of the rocket
• ve is the exhaust velocity
• m is the mass of the rocket
• Δm/Δt is the rate at which propellant mass is ejected
• g is the acceleration due to gravity
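The expression for the acceleration can be evaluated directly; the exhaust speed, mass and burn rate below are invented for illustration:

```python
def rocket_acceleration(ve, m, dm_dt, g=9.81):
    """a = (ve / m) * (dm/dt) - g, the expression given above.
    ve: exhaust speed (m/s), m: current rocket mass (kg),
    dm_dt: rate at which propellant mass is expelled (kg/s)."""
    return (ve / m) * dm_dt - g

# Illustrative numbers: 2500 m/s exhaust, 10,000 kg rocket, 50 kg/s burn rate.
a = rocket_acceleration(2500.0, 10_000.0, 50.0)
print(round(a, 2))  # 2.69 m/s^2 — thrust barely exceeds gravity at lift-off
```

Note that as propellant burns, m decreases, so the acceleration grows during the burn.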
1.4. BICYCLE
Bicycles working
The science behind this ‘simple’ machine.
Bicycles turn energy created by our bodies into kinetic energy. Kinetic
energy is “a property of a moving object or particle and depends not only on its
motion but also on its mass” (Encyclopedia Britannica). If work, which transfers
energy, is done on an object by applying a net force, the object speeds up and thereby
gains kinetic energy. A bicycle can convert up to 90 percent of a person’s energy
and movement into kinetic energy. This energy is then used to move the bike. The
rider’s balance and momentum help keep the bike stable while traveling along a
path.
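The kinetic-energy relation behind this, KE = (1/2)mv², together with the quoted ~90 percent conversion figure, can be sketched as follows; the rider mass and speed are assumed values:

```python
def kinetic_energy(m, v):
    """KE = (1/2) * m * v**2, in joules."""
    return 0.5 * m * v ** 2

# Hypothetical rider plus bike (80 kg) cruising at 5 m/s (18 km/h):
ke = kinetic_energy(80.0, 5.0)
print(ke)               # 1000.0 J
# At ~90% conversion efficiency, the rider had to supply roughly:
print(round(ke / 0.9))  # 1111 J of body energy
```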
The wheels ultimately support your entire weight but in a very interesting
way. If bicycle wheels were completely solid, they’d squash down as you sat on the
seat and push back up to support you. However, the wheels of most bikes are formed
by using a strong hub with a thin rim and several spokes. Bicycles have spoked
wheels to make them both strong and lightweight and to lessen drag.
1.5. ROLLER COASTER
So come along for the ride and delve into the amazing physics of roller
coasters and discover the potential and kinetic energy at work.
Let’s understand the physics of roller coasters and their work with the help of
the illustration which is given below.
Roller coasters are unique in that they don't have an engine or power source
of their own. Instead, they rely on the initial energy provided by the lift hill to get
the ride started. This initial energy is stored as potential energy due to the height of
the coaster, and as it descends and picks up speed, it is converted into kinetic energy.
The forces of inertia, gravity and centripetal force all play a role in the
experience of a roller coaster. Inertia means that a body continues moving
in a straight line unless acted upon by an external force. Gravitational force pulls the
coaster and its riders downward, while centripetal force is responsible for the circular
motion of the coaster as it goes through turns.
The centripetal force keeps the coaster car moving in a circular path as it goes
through turns. This force acts inward, towards the centre of the turn, and is equal to
the mass of the car multiplied by its acceleration. Friction and air resistance also play
a role, as they can slow down the coaster car and affect its speed.
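The inward force described here is the standard centripetal relation F = mv²/r; a quick sketch with made-up coaster numbers:

```python
def centripetal_force(m, v, r):
    """F = m * v**2 / r, directed toward the centre of the turn."""
    return m * v ** 2 / r

# Hypothetical coaster car: 500 kg at 20 m/s around a 10 m radius turn.
F = centripetal_force(500.0, 20.0, 10.0)
print(F)                          # 20000.0 N
print(round(F / (500.0 * 9.81), 2))  # about 4 g experienced by the car
```

Tighter turns (smaller r) or higher speeds sharply increase the force, which is why loop shape matters for rider comfort.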
Throughout the ride, the law of conservation of energy states that the total
energy of the system must remain constant. This means that as the potential energy
of the coaster decreases due to its height, the kinetic energy increases to compensate,
giving the coaster its speed and thrill.
PE = mgh
where:
PE = potential energy
m = mass of the car
g = acceleration due to gravity
h = height above the ground
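Combining PE = mgh with the kinetic energy (1/2)mv² gives the frictionless speed at the bottom of the hill; the hill height and car mass below are assumptions:

```python
import math

def potential_energy(m, h, g=9.81):
    """PE = m * g * h."""
    return m * g * h

def speed_at_bottom(h, g=9.81):
    """If all PE converts to KE: m*g*h = (1/2)*m*v**2  =>  v = sqrt(2*g*h).
    Mass cancels, so the speed is independent of the car's mass."""
    return math.sqrt(2 * g * h)

# Hypothetical 30 m lift hill and 500 kg car:
print(potential_energy(500.0, 30.0))    # 147150.0 J stored at the top
print(round(speed_at_bottom(30.0), 1))  # 24.3 m/s at the bottom, ignoring friction
```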
Theoretically, the constant conversion of energy means that the roller coaster
should never stop, but in reality, this is not the case. The frictional force generated
by the wheels rubbing against the track, as well as air resistance and the rattling noise
produced by the ride, all consume some of the initial potential energy stored by
lifting the coaster to the top of the hill. As a result, the cars eventually run out of
energy and come to a stop. To account for this, the loops at the end of the ride are
made smaller, allowing the cars to slow down gradually and come to a smooth stop.
On a roller coaster, when it takes a tight turn, riders feel like they are being
pushed away from the loop. This is due to inertia. Even when you are in an upside-
down loop, the force of the roller coaster's acceleration during the turn is greater than
the pull of gravity. This acceleration force is what keeps you in your seat and
prevents you from falling out while you're upside down.
It's important to mention that roller coaster loops are not circular but
elliptical (teardrop-shaped), as circular loops would generate an excessive
centripetal force that would make the ride too intense and uncomfortable for riders.
Almost 30 years earlier, in 1942, the German V2 rocket was the first to reach
the official boundary of space. A few years later, in 1947, the first animals were sent
into space - these were fruit flies! Scientists tested the effects of space travel on these
fruit flies because the effects on their bodies could tell them a lot about the effects
that would likely occur with other animals. A few years later, in 1949, the first
monkey (named 'Albert II') was sent into space.
• Manned Spaceflight
Interplanetary space travel is the next frontier, involving missions to other planets
within our solar system. While still in the early stages, projects like NASA’s Artemis
program aim to return humans to the Moon and eventually send astronauts to Mars,
expanding our presence in the cosmos.
SKILL - PHYSICS FOR EVERYDAY LIFE
UNIT :2
OPTICAL INSTRUMENTS AND LASER
2.1.VISION CORRECTIVE LENSES:
➢ Correcting Vision Using Lenses
In this explainer, we will learn how to use different kinds of lenses to correct
human vision.
When our eyes are working properly, we see objects around us clearly.
The objects might be nearby, such as a coin within arm’s length, as shown
below.
The objects might also be far away. Consider the diagram below, which shows
how light rays from an increasingly distant mountain reach our eyes.
This diagram shows that the farther from the mountain the eye is, the smaller
the angle between the rays from the base and the top is, and the more nearly parallel
these rays are.
At a great distance, this is approximated by the following diagram.
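The shrinking angle between the rays from the base and the top can be illustrated numerically; the mountain height and viewing distances below are arbitrary:

```python
import math

def subtended_angle_deg(height, distance):
    """Angle (in degrees) between rays from the base and the top of an object
    seen from `distance` away: theta = atan(height / distance)."""
    return math.degrees(math.atan(height / distance))

# A hypothetical 1000 m mountain viewed from increasing distances:
for d in (2_000, 20_000, 200_000):
    print(d, round(subtended_angle_deg(1000, d), 3))
```

The angle falls roughly tenfold with each tenfold increase in distance, so the rays from a very distant object are effectively parallel.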
Objects look clear (not blurry) to us when our eyes are able to focus the light
coming from them. Notice that the eye does not need to bend light rays from faraway
objects (like the mountain) as much as it needs to bend light coming from objects
nearby (like the coin) in order to see them clearly.
In a healthy eye, a part of the eye called the lens focuses incoming light on the
back of the eye, as shown below.
At the back of the eye is a part called the retina. When light is brought to a
focus on the retina, as shown above, we see a clear, sharp image.
So far we have considered only healthy vision, where the lens is able to focus
incoming light on the retina at the back of the eye.
Proper vision depends on the shape of the eyeball. If the eyeball shape
becomes irregular, the lens may no longer be able to focus light on the retina.
➢ NEARSIGHTED EYE :
The following diagram shows a healthy eye (on the left) alongside
a misshapen eye.
Notice that the light entering the misshapen eye comes to a focus before it
reaches the retina. As the diagram shows, once this light reaches the retina, it will
not be in focus. The result is a blurry, distorted image.
An eye that focuses light to a point before the light reaches the retina is said
to be nearsighted. This name is given because a nearsighted eye can bring into focus
objects that are nearby, but not objects that are far away.
Nearsighted eyes focus some incoming light too much, bending it so the rays
meet before they reach the retina at the back of the eye. Other eyes, however, focus
incoming light too little so that the rays of light do not come to a focus at all inside
the eye.
➢ FARSIGHTED EYE:
The narrower eye is said to be farsighted. Farsighted eyes are able to produce
clear images of faraway objects, while objects nearby look fuzzy and indistinct.
Nearsighted eyes and farsighted eyes both suffer from limits on how much the
eye’s lens can focus light.
2) TYPES OF CORRECTIVE LENSES
One type makes incoming light rays spread apart, while the other type makes
those rays focus to a point.
A lens that spreads out incoming light is called a diverging or concave lens.
An example of a concave lens is shown below.
By diverging incoming light, a concave lens can correct a particular vision
problem.
Recall that nearsighted eyes focus light too much, making light rays from
distant objects focus to a point before they reach the retina.
Putting a concave lens in front of a nearsighted eye can correct the eye’s vision
problem. It is now able to focus light rays from faraway objects onto the retina,
producing clear images.
While concave lenses are one type of lens we might use for eyeglasses, the
second common type are called convex lenses.
Convex lenses can correct the vision of a farsighted eye. Recall that a
farsighted eye is unable to focus light from nearby objects on the retina.
2.2.COLOUR PHOTOGRAPHY
Today we take colour photography for granted. Taking pictures in full, natural
colour is so easy that we don’t pause to consider how it all came about. Yet the
search for a cheap and simple process of colour photography was a long and difficult
quest.
This story explores the different approaches early inventors and entrepreneurs
took in the race to develop a successful colour photographic process, from hand-
colouring
COLOUR FIRST ADDED TO PHOTOGRAPHS:
In 1839, when photographs were seen for the very first time, they were greeted
with a sense of wonder. However, this amazement was soon mixed
with disappointment.
People didn’t understand how a process that could record all aspects of a scene
with such exquisite detail could fail so dismally to record its colours. The search
immediately began for a means of capturing accurately not only the form but also
the colours of nature.
While scientists, photographers, businessmen and experimenters laboured, the
public became impatient. Photographers, eager to give their customers what they
wanted, soon took the matter, literally, into their own hands and began to add colour
to their monochrome images. As the writer of A Guide to Painting Photographic
Portraits noted in 1851:
When the photographer has succeeded in obtaining a good likeness, it passes
into the artist’s hands, who, with skill and colour, give to it a life-like and natural
appearance.
Several different processes and materials were used for hand-colouring, which
proved to be a cheaper, simpler alternative to early colour processes. It provided
studio employment for miniature painters who had initially felt threatened by the
emergence of photography.
In skilled hands, effects of great subtlety and beauty could be achieved.
However, even at its very best, hand-colouring remained an unsatisfactory means of
recording colour; it could not reproduce the colours of nature exactly.
Photographs could already capture light and shade. What was required was a
process that could capture colour in the same way.
THE BIRTH OF THE THREE-COLOUR PROCESS:
Before colour could be reproduced, the nature of light—and how we
perceive colour—had to be clearly understood.
The scientific investigation of colour began in the 17th century. In 1666, Sir
Isaac Newton split sunlight with a prism to show that it was actually a combination
of the seven colours of the spectrum.
Nearly 200 years later, in 1861, a young Scottish physicist, James Clerk
Maxwell, conducted an experiment to show that all colours can be made by an
appropriate mixture of red, green and blue light.
Maxwell made three lantern slides of a tartan ribbon through red, green and
blue filters. Using three separate magic lanterns—each equipped with a filter of the
same colour the images had been made with—he then projected them onto a
screen.
When the three images were superimposed together on the screen, they
combined to make a full-colour image which was a recognisable reproduction of
the original.
EARLY EXPERIMENTS IN COLOUR PHOTOGRAPHY:
While the fundamental theory may have been understood, a practical method
of colour photography remained elusive.
In 1891 Gabriel Lippmann, a professor of physics at the Sorbonne,
demonstrated a colour process which was based on the phenomenon of light
interference—the interaction of light waves that produces the brilliant colours you
see in soap bubbles. This process won Lippmann a Nobel Prize in 1908 and was
marketed commercially for a short time around the turn of the 20th century.
Not long after Maxwell’s 1861 demonstration, a French physicist, Louis
Ducos du Hauron, announced a method for creating colour photographs by
combining coloured pigments instead of light.
Three black-and-white negatives, taken through red, green and blue filters,
were used to make three separately dyed images which combined to give a coloured
photograph. This method forms the basis of today’s colour processes.
While this work was scientifically important, it was of limited practical value
at first. Exposure times were long, and photographic materials sensitive to the whole
range of the colour spectrum were not yet available.
Color models explain how colors are produced, how they interact, and how we
reproduce them.
1) ADDITIVE COLOR MODELS:
Additive color starts with black and adds red, green and blue light to
produce the visible spectrum of colors. As more color is added, the result is
lighter. When all three colors are combined equally, the result is white light.
An example of how applying each light channel, red, green and blue, to a
full-color photograph alters its color appearance. The RGB model is primarily used
for screen displays.
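Additive mixing as described can be sketched with 8-bit RGB values; this is a simplified model that ignores gamma correction:

```python
def add_light(*channels):
    """Additive mixing: start from black (0, 0, 0) and add each light's
    R, G, B contribution, clamping to the 0-255 display range."""
    mixed = [0, 0, 0]
    for r, g, b in channels:
        mixed = [min(255, m + c) for m, c in zip(mixed, (r, g, b))]
    return tuple(mixed)

red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)
print(add_light(red, green))         # (255, 255, 0) — yellow
print(add_light(red, green, blue))   # (255, 255, 255) — white, as the text says
```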
2) SUBTRACTIVE COLOR MODELS:
Subtractive color starts with white (for example, light reflected from paper)
and subtracts light using cyan, magenta and yellow pigments or dyes. As more
color is added, the result is darker; combining all three equally produces black.
Subtractive models such as CMYK are primarily used for printing.
POLARIZED LENSES:
There are a lot of different options for protecting your eyes, and polarized
lenses are just one possibility. Just like protecting your skin if you’re spending hours
in the sun, your eyes need protection as well.
By coating polarized lenses with a special chemical, they block some of that
light as it passes through them. It acts as a filter for what’s being reflected directly
into your eyes.
With polarized lenses, the filter is vertical, so only some of the light can pass
through the openings. Because glare is typically horizontal light, polarized lenses
block this light and only allow vertical light. With the horizontal light blocked by
polarized lenses, this helps eliminate glare from shining directly into your eyes.
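The vertical-filter behaviour described here is quantified by Malus's law, I = I0·cos²θ (the standard relation for polarizing filters, not named in the text); θ is the angle between the light's polarization and the filter's transmission axis:

```python
import math

def transmitted_intensity(i0, angle_deg):
    """Malus's law: I = I0 * cos(theta)**2."""
    return i0 * math.cos(math.radians(angle_deg)) ** 2

# Vertical filter, as in polarized sunglasses:
print(transmitted_intensity(100, 0))              # 100.0 — vertical light passes fully
print(round(transmitted_intensity(100, 90), 10))  # 0.0 — horizontal glare is blocked
```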
However, because the polarized coating also darkens the lens, polarized
lenses aren’t available for regular reading glasses.
Polarized lenses can make it difficult to see LCD screens. If it’s important to
be able to see a dashboard or screen for safety or convenience reasons, polarized
lenses may not be the best option for you.
Plus, they can also react negatively to certain tints on windshields, which means
they aren’t always the best choice for driving.
Be careful about claims about the benefits of wearing polarized or tinted lenses
at night. Polarized lenses are sometimes suitable for driving during the day, but
wearing them at night can be dangerous.
The darkened lens makes it harder to see in low-light situations, which can be
made worse if you already have trouble seeing at night.
If you’re not sure whether you should try polarized lenses, try talking to an
eye doctor about which type of protective sunglasses are best for you and your eyes.
HOLOGRAPHY:
When illuminated with a laser, this pattern brings the object to life, making it
appear as a realistic 3D image. The key to holography is capturing the interference
pattern between a reference beam and an object beam.
PROCESS OF HOLOGRAPHY:
The process of holography involves several steps to create a 3D image. First,
a coherent light source, like a laser, illuminates the object.
The light splits into two beams: the object beam interacting with the object
and carrying its information and the reference beam for comparison. These beams
combine, creating an interference pattern. This pattern is recorded onto a special
material, like a holographic plate.
Beam splitters are also used to split the laser beam into the object and
reference beams. As for lasers, helium-neon (He-Ne) lasers are commonly employed
in holography due to their coherence and ability to emit a stable beam of red light.
Solid-state lasers, such as diode lasers, are also used for their versatility and
compactness. The choice of mirror and laser depends on the specific requirements
of the holographic experiment or application.
TYPES OF HOLOGRAPHY:
There are different types of holography that offer unique approaches to
creating and displaying 3D images.
APPLICATIONS OF HOLOGRAPHY:
Holography finds applications in various fields, offering unique ways to
enhance our experiences and advance different industries. Here are five
applications of holography:
CONCLUSION:
In conclusion, holography is an amazing imaging technique that creates
realistic 3D visuals and offers an immersive experience. Its versatile applications
span entertainment, research, security, and data storage.
2.7.LASER:
HISTORY OF LASER:
Albert Einstein was the first to describe stimulated emission, the process on
which the LASER is based. A working system, however, was first built in 1960 by
Theodore H. Maiman, drawing primarily on concepts developed by Charles Hard
Townes and Arthur Leonard Schawlow.
LASER TYPES:
Below is a list of LASER types, depending on their wavelengths and applications.
• Gas LASER
• Semiconductor LASER
• Chemical LASER
• Liquid or Dye LASER
• Excimer LASER
PROPERTIES OF LASER:
We may classify the laser beam characteristics into four main groups:
• Superior Coherence
• Superior Monochromatism
• High Output
• Superior Directivity
These properties make lasers useful in various fields, such as optical
communication and protection.
DISADVANTAGES OF LASER:
• Lasers are expensive, so patients who need laser-based treatment face
considerable expense.
• Lasers are expensive to maintain, and therefore impose high costs on doctors
and hospital administrators.
• Laser equipment adds complexity to procedures and can lengthen the
treatment period.
UNIT-3 (PHYSICS FOR EVERYDAY LIFE)
3.1 MICROWAVE OVEN:
Microwave ovens work on the principle of conversion of
electromagnetic energy into thermal energy.
Electromagnetic (EM) energy refers to the radiation (waves)
comprising an electrical field and magnetic field oscillating perpendicular
to each other. When a polar molecule, i.e., a molecule containing opposite
charges, falls in the path of these EM radiations, it oscillates to align with
them.
This causes the energy to be lost from the dipole by molecular
friction and collision, resulting in heating. The water molecules present
inside our food undergo a similar phenomenon when they come
in contact with microwave radiation, heating the food from the inside out.
Microwaves are electromagnetic radiations with frequencies
between 300 MHz (0.3 GHz) and 300 GHz, with corresponding
wavelengths ranging from about 1 m down to 1 mm, respectively. In most
ovens, the microwave used has a frequency of 2.45 GHz (i.e., wavelength =
12.2 cm).
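These numbers are tied together by the frequency-wavelength relation λ = c/f; note that a 12.2 cm wavelength corresponds to 2.45 GHz, the band used by household ovens:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def wavelength(freq_hz):
    """lambda = c / f, in metres."""
    return C / freq_hz

print(round(wavelength(2.45e9) * 100, 1))  # 12.2 cm — the household oven band
print(round(wavelength(300e6), 2))         # 1.0 m  (300 MHz end of the range)
print(round(wavelength(300e9) * 1000, 2))  # 1.0 mm (300 GHz end of the range)
```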
These dimensions allow microwaves to penetrate deep inside the
food and cook it from inside, while the temperature of the air present
around the food remains constant as air is nonpolar.
There is a common misconception that microwaves in a microwave
oven excite a natural resonance in water. The frequency of a microwave
oven is well below any natural resonance in an isolated water molecule,
and in liquid water, those resonances are so smeared out that they’re
barely noticeable anyway.
MAIN COMPONENTS OF MICROWAVE OVEN:
a)High Voltage Transformer: Unlike many other household
appliances, the microwave oven requires more power than the normal
voltage that the home’s electrical wiring carries. To accomplish this, a
step-up transformer with a high-voltage output is placed inside the oven.
The 240 V supply is stepped up to a few thousand volts, which is then fed to
the cavity magnetron.
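The step-up behaviour follows the ideal transformer relation Vs/Vp = Ns/Np; the turns ratio below is purely illustrative, not a real oven specification:

```python
def secondary_voltage(v_primary, n_primary, n_secondary):
    """Ideal transformer relation: Vs = Vp * (Ns / Np)."""
    return v_primary * n_secondary / n_primary

# Hypothetical 1:10 turns ratio stepping 240 V mains up for the magnetron:
print(secondary_voltage(240, 100, 1000))  # 2400.0 V
```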
WORKING MECHANISM:
The process of heating food in the microwave oven is fairly simple;
however, the mechanism involved in that process is somewhat atypical.
After the generation of microwaves at the magnetron, they are guided by
the waveguide towards the food inside the cavity. The microwaves
penetrate through the surface of the food and reach the water molecules
present inside it.
As the orientation of the electric field changes over time, the polar
molecules of water attempt to follow the field by changing their
orientation inside the material to line up along the field lines in an
energetically favourable configuration (namely, with the positive side
pointing in the same direction as the field lines).
As these molecules change direction rapidly (millions of times per
second at least), they gain energy, which increases the temperature of the
material. This process is called dielectric heating.
The microwave energy diminishes according to the inverse square
law; therefore, the cavity chamber, where we place the food, is designed
to maximize the heating efficiency of the microwaves. Furthermore, most
microwave ovens come with a door interlock switch that prevents the oven
from operating until the door is completely closed.
❖ ADVANTAGES OF MICROWAVE OVEN:
▪ The volumetric heating process of microwaves is their
most prominent characteristic. In the conventional cooking
method, the heat must spread inwards from the surface of the
food item, whereas the spread of heat in the case of microwave
oven is done in a controlled manner with the help of the
microwaves.
▪ It’s a quick and convenient method of heating food and
leftovers.
▪ Since microwaves can only interact with polar
substances like water, they cannot affect the nutritional value of
those ingredients that are non-polar. Other conventional cooking
methods, however, may destroy some polar as well as non-polar
ingredients during the process.
▪ The user interface and micro-controller facilitate precise
control over the cooking temperature.
▪ The ease of the cooking process in a microwave oven
also results in easier cleaning of the equipment after use.
❖ DISADVANTAGES OF MICROWAVE OVEN:
▪ It is important to take care of what kind of utensils are
being used in a microwave. A dish that is not microwave-safe
may be damaged by the heat or may leach unwanted chemicals into the
food.
▪ The cost of equipment is high in comparison to other
conventional cooking methods.
▪ Microwave leakage may lead to electromagnetic
interference with other electrical equipment present in the
surrounding vicinity. The pacemakers installed in some patients
are particularly vulnerable to such radiation leakage.
▪ Microwave radiation can heat body tissue the same way
it heats food. Exposure to high levels of microwaves can cause
a painful burn. In particular, the eyes and the testes are
vulnerable to microwave heating because there is relatively little
blood flow in them to carry away excess heat.
▪ Another disadvantage of microwaves is that they have
limited capacity and because of this, they are not the best option
for large families.
3.2.AIR CONDITIONERS :
Many people mistakenly think that their AC works by “creating”
cold air. It doesn't. Instead, it works by removing the heat inside your
house and transferring it outdoors.
WORKING OF AIR CONDITIONING TO COOL YOUR
HOME :
Many homes in North America rely on split-system air conditioners,
often referred to as “central air.” Air conditioning systems include a
number of components and do more than just cool the air inside. They
also can control humidity, air quality and airflow within your home. So
before we answer the question of how do air conditioners work, it will be
helpful to know what makes up a typical system.
CENTRAL AIR :
The cooling process starts when the thermostat senses the air
temperature needs to be lowered and sends signals to the air conditioning
system components both inside and outside the home to start running. The
fan from the indoor unit pulls hot air from inside the house through return
air ducts. This air passes through filters where dust, lint and other airborne
particles are collected.
As you can see, asking the question “how do air conditioners work”
can lead to a very simple or very complicated explanation. It’s the same
with describing types of air conditioners.
3.3.BULB
How were electric lamps invented? In 1878, Thomas Alva Edison
began research into developing a practical incandescent lamp, and in
1879, the electric bulb was invented. Edison applied for a patent for
“Improvement in Electric Lights” on 14 October 1878.
ELECTRIC BULB:
The electric bulb is the simplest electrical lamp, invented
for illumination more than a century ago. It was the small and
simple light that brightened the dark space. The electric bulb is
also known as an incandescent lamp, incandescent light globe or
incandescent light bulb. Bulbs come in different sizes and light outputs and
operate with a voltage range from 1.5 volts to about 300 volts. Now let
us study the parts and structure of the bulb in detail.
An electric light bulb or lamp that produces light by heating a
filament wire to a high temperature until it glows is known
as incandescent bulb. The incandescent bulb was invented by an
American inventor, named Thomas Alva Edison.
The incandescent bulb is an electric lamp that works on the
principle of incandescence, that is, it emits light by the heating of a
filament. Incandescent lamps come in different sizes with different
voltages and wattages.
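Since a bulb's rating ties its voltage and wattage together, the filament current and hot resistance follow directly from P = V × I and V = I × R. The sketch below illustrates this; the 230 V, 60 W rating is an assumed example, not a value from the text.

```python
# Illustrative sketch: relating a bulb's rated voltage and wattage to its
# filament current and hot resistance, using P = V * I and R = V / I.
# The 230 V, 60 W rating below is an assumed example.

def filament_properties(voltage_v, power_w):
    """Return (current in amperes, hot resistance in ohms) for a rated bulb."""
    current = power_w / voltage_v      # I = P / V
    resistance = voltage_v / current   # R = V / I, equivalently V^2 / P
    return current, resistance

i, r = filament_properties(230.0, 60.0)
print(f"current ≈ {i:.3f} A, hot resistance ≈ {r:.1f} ohm")
```

Note that the filament's cold resistance is much lower; resistance rises as the filament heats up to its glowing temperature.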
It has contacts in the base and two lead wires that connect
the lamp to the electric circuit.
3.4. FAN
ELECTRIC FAN:
Electric fans, which every one of us has seen at home, are a
necessity in summer, when the atmospheric temperature rises above the
comfort level of the human body.
When an electric fan rotates, it blows the air around it towards the
corners of the room and thus speeds up the evaporation process, resulting in
the cooling of the human body and the room.
COMPONENTS OF ELECTRIC FAN:
AC vs DC:
➢ The advantages of a DC motor ceiling fan over an AC
motor ceiling fan:
3.5.TELEVISION
INTRODUCTION:
Television, or TV, is a system for sending moving pictures and
sound from one place to another. It is one of the most important and
popular forms of communication. TV programs provide news,
information, and entertainment to people all over the world.
HISTORY:
Inventors in Great Britain and the United States made the first
demonstrations of TV in the 1920s. The first working TV sets appeared in
the 1930s. In 1936 the British Broadcasting Corporation (BBC) started
the world’s first TV programming. The first commercial television
stations in the United States started broadcasting in 1941.
Many families bought their first TV set after World War II, in
the late 1940s and the 1950s. The first sets could show only black-and-
white pictures. Color TV and cable TV started in the 1950s. Digital TV
arrived in the 1990s.
WORKING PRINCIPLE OF TV:
TV begins with a video camera. The camera records the pictures
and sound of a TV program. It changes the pictures and sound into electric
signals. A TV set receives the signals and turns them back into pictures
and sound.
i. The transmitter produces a television signal from the audio and
video signals.
ii. This TV signal is broadcasted by an antenna as
an electromagnetic wave of an assigned frequency range or
channel for that station. A television antenna picks up all
broadcast signals that reach it.
iii. These signals produce electric current within the antenna inside
the TV.
iv. The tuner selects the desired broadcast signal.
v. Other parts of the receivers separate the audio signal and send it
to a speaker system.
vi. The video signal is divided into three signals corresponding to the primary colors.
vii. Thus, we get to view scenes on television.
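Step (vi) above can be illustrated in code: a colour picture is a stream of (R, G, B) pixels, and splitting it yields three single-colour signals. This is only a toy model; a real receiver does this with dedicated hardware, not Python.

```python
# Toy model of step (vi): separating a colour video signal into three
# primary-colour signals. Each pixel is an (R, G, B) tuple of intensities.

def split_primaries(frame):
    """Split a list of (r, g, b) pixels into red, green and blue signals."""
    red   = [p[0] for p in frame]
    green = [p[1] for p in frame]
    blue  = [p[2] for p in frame]
    return red, green, blue

# A four-pixel example frame: pure red, pure green, pure blue, mid grey.
frame = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (128, 128, 128)]
r, g, b = split_primaries(frame)
print(r)  # [255, 0, 0, 128]
```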
THE TV SIGNAL :
A standard TV camera changes the pictures into an electric
signal called the video signal. The video signal carries the pictures in the
form of tiny dots called pixels. The camera’s microphone changes the
sound into another electric signal, called the audio signal. The video and
audio signals together form the TV signal.
Digital TV, or DTV, is a newer way of handling TV signals.
A digital TV signal carries pictures and sound as a number code, like a
computer does. A digital signal can carry more information than a
standard signal can, which creates better pictures and sound. High
definition TV, or HDTV, is a high-quality form of digital TV.
A TV signal can reach a TV set in several ways. Local TV
stations use antennas to send, or broadcast, signals through the air
as radio waves. Cable TV stations send signals through underground
cables.
Satellites, or spacecraft, traveling high above Earth can send
signals to special antennas called satellite dishes. A signal can also come
from a VCR, DVD player, or DVR (digital video recorder) connected to
the TV set. VCRs, DVRs, and some DVD players can record a TV signal
coming into the TV and then play it back later.
DISPLAY :
A standard TV set turns the video signal into beams of tiny
particles called electrons. It shoots these beams at the back of the screen
through a picture tube. The beams “paint” the pixels on the screen in a
series of rows to form the picture. The TV set sends the audio signal to
loudspeakers, which reproduce the sound.
LCD and plasma TVs form the picture differently. They do not
use a picture tube and electron beams. Because they do not contain a picture
tube, LCD and plasma TVs are much thinner and lighter than standard
TVs. They can even hang on a wall.
LCD stands for liquid crystal display. Liquid crystal is a
substance that flows like a liquid but has some tiny solid parts, too. The
display sends light and electric current through the liquid crystal. The
electric current causes the solid parts to move around. They block or let
light through in a certain way to make the picture on the screen.
A plasma display has tiny colored lights containing a gas called
plasma. Electric current sent through the plasma causes it to give off light,
which makes the picture.
Advantages:
❖ News
❖ Relaxation
Disadvantages:
❖ Addiction
❖ Time-Wasting
3.6.VACUUM CLEANER:
Imagine wanting to vacuum your carpets in the early years of the
20th century. You would have to call a door-to-door vacuuming service,
which would send a huge horse-drawn machine to your house.
Hoses would be fed through your windows, attached to the gasoline-
powered vacuum outside in the street. Not very convenient, right? And
when the first portable electric vacuum was invented in 1905, it weighed
92 pounds…also not very convenient!
Vacuums have undergone many modifications over the years, going
from simple carpet sweepers to high-powered electric suction machines.
The vacuum cleaner as we know it was invented by James Murray
Spangler in 1907. He used an old fan motor to create suction and a
pillowcase on a broom handle for the filter. He patented his ‘suction
sweeper,’ but soon after that, William H.
Hoover bought his patent and started the Hoover Company to
manufacture the vacuum cleaners. Hoover’s ten-day free trial and door-
to-door sales soon placed vacuum cleaners in homes all over the country.
Over the years Hoover added components (such as the ‘beater bar’) to
dislodge dirt in the carpet so the vacuum could suck it up.
Vacuum cleaners work because of Bernoulli’s Principle, which
states that as the speed of air increases, the pressure decreases. Air will
always flow from a high-pressure area to a low-pressure area, to try to
balance out the pressure.
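Bernoulli's principle can be put in numbers: along a streamline, p + ½ρv² stays constant, so speeding the air up lowers its static pressure. The sketch below uses an assumed fan airspeed of 20 m/s for illustration.

```python
# Illustrative sketch of Bernoulli's principle: for air treated as
# incompressible along a streamline, p + 0.5 * rho * v**2 is constant,
# so faster-moving air has lower static pressure.

RHO_AIR = 1.2  # kg/m^3, approximate density of room-temperature air

def pressure_drop(v_slow, v_fast, rho=RHO_AIR):
    """Static-pressure drop (Pa) when air speeds up from v_slow to v_fast."""
    return 0.5 * rho * (v_fast**2 - v_slow**2)

# Air nearly at rest outside vs. an assumed ~20 m/s past the vacuum's fan:
dp = pressure_drop(0.0, 20.0)
print(f"pressure inside is lower by ≈ {dp:.0f} Pa")  # ≈ 240 Pa
```

That modest pressure difference, acting over the intake opening, is what drives the outside air (and the dirt it carries) into the machine.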
A vacuum cleaner has an intake port where air enters and an exhaust
port where air exits. A fan inside the vacuum forces air toward the exhaust
port at a high speed, which lowers the pressure of the air inside, according
to Bernoulli’s Principle.
This creates suction – the higher-pressure air from outside the vacuum
rushes in through the intake port to replace the lower-pressure air. The
incoming air carries with it dirt and dust from your carpet.
This dirt is trapped in the filter bag, but the air passes right through
the bag and out the exhaust. When the bag is full of dirt, the air slows
down, increasing in pressure. This lowers the suction power of your
vacuum, which is why it won’t work as well when the bag is full.
Make a Vacuum Cleaner
A vacuum cleaner is able to suck dirt off carpet because high
pressure air from outside it flows toward low pressure air inside. In an
electric vacuum, a fan causes air inside the vacuum to move quickly,
which lowers the air pressure, causing suction. The higher-pressure air
from outside the vacuum is sucked in to replace the low-pressure air,
bringing dirt and dust with it to be caught in the filter bag.
THE 4 ESSENTIAL PARTS OF ANY VACUUM
4.1.SOLAR ENERGY
Solar energy is created by nuclear fusion that takes place in the sun. It is
necessary for life on Earth, and can be harvested for human uses such as electricity.
Solar energy is any type of energy generated by the sun.
Fusion occurs when protons of hydrogen atoms violently collide in the sun’s core and
fuse to create a helium atom.
In stars that are about 1.3 times bigger than the sun, the CNO cycle drives the
creation of energy. The CNO cycle also converts hydrogen to helium, but relies on
carbon, nitrogen, and oxygen (C, N, and O) to do so. Currently, less than two percent
of the sun’s energy is created by the CNO cycle.
The energy, heat, and light from the sun flow away in the form of
electromagnetic radiation (EMR).
The sun also emits infrared radiation, whose waves have a much lower frequency. Most
heat from the sun arrives as infrared energy.
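Where the sun's radiation peaks can be estimated with Wien's displacement law, λ_max = b / T. With a surface temperature near 5778 K (an assumed standard value), the peak falls in visible light, while a large share of the emitted power extends into the infrared.

```python
# Illustrative sketch: Wien's displacement law gives the wavelength at which
# a black body's emission peaks. For the Sun (surface ~5778 K, assumed),
# the peak lies in the visible range, with a long infrared tail.

WIEN_B = 2.898e-3  # m*K, Wien's displacement constant

def peak_wavelength_nm(temperature_k):
    """Peak emission wavelength in nanometres for a black body."""
    return WIEN_B / temperature_k * 1e9

print(f"solar peak ≈ {peak_wavelength_nm(5778):.0f} nm")  # ≈ 502 nm (green)
```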
Solar Panels
Solar energy is any type of energy generated by the sun. Solar energy can be
harnessed directly or indirectly for human use. These solar panels, mounted on a
rooftop in Germany, harvest solar energy and convert it to electricity.
PP Chain Reaction
Solar energy originates as nuclear fusion taking place in the sun's superhot core.
The sun produces energy through the proton-proton (pp) chain reaction. In the pp
chain reaction, isotopes of hydrogen (protons) fuse to form helium. The incredibly
powerful process also generates particles known as neutrinos and positrons, as well
as high-frequency gamma rays.
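The energy bookkeeping of the pp chain can be checked with E = mc²: four hydrogen nuclei end up as one helium-4 nucleus, and the small mass difference is released as energy carried by gamma rays, positrons and neutrinos. The sketch below uses standard atomic masses in unified mass units (u).

```python
# Illustrative sketch of the pp-chain energy release: about 0.7% of the
# mass of four hydrogen nuclei is converted to energy when they fuse into
# one helium-4 nucleus (E = m * c^2). Masses in unified mass units (u).

M_HYDROGEN = 1.007825  # u, hydrogen-1 atomic mass
M_HELIUM4  = 4.002602  # u, helium-4 atomic mass
U_TO_MEV   = 931.494   # MeV, energy equivalent of 1 u

mass_defect = 4 * M_HYDROGEN - M_HELIUM4
energy_mev = mass_defect * U_TO_MEV
fraction = mass_defect / (4 * M_HYDROGEN)

print(f"mass converted: {fraction:.2%}")          # ≈ 0.71%
print(f"energy released ≈ {energy_mev:.1f} MeV")  # ≈ 26.7 MeV
```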
A)Greenhouse Effect
The infrared, visible, and UV waves that reach Earth take part in a process of
warming the planet and making life possible—the so-called “greenhouse
effect.”
About 30 percent of the solar energy that reaches Earth is reflected back into
space. The rest is absorbed into Earth’s atmosphere. The radiation warms
Earth’s surface, and the surface radiates some of the energy back out in the
form of infrared waves. As they rise through the atmosphere, they are
intercepted by greenhouse gases, such as water vapor and carbon dioxide.
Greenhouse gases trap the heat that reflects back up into the atmosphere.
In this way, they act like the glass walls of a greenhouse. This greenhouse
effect keeps Earth warm enough to sustain life.
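The energy balance described above can be turned into a rough calculation: with about 30 percent of sunlight reflected, the Stefan-Boltzmann law gives Earth's no-greenhouse equilibrium temperature of about 255 K, well below the actual mean surface temperature of roughly 288 K; the difference is the greenhouse effect. This is a simplified zero-dimensional model.

```python
# Simplified sketch of Earth's radiative balance: absorbed sunlight
# (averaged over the sphere, minus the ~30% reflected) equals black-body
# emission sigma * T^4, giving the no-greenhouse equilibrium temperature.

SIGMA = 5.67e-8           # W/(m^2 K^4), Stefan-Boltzmann constant
SOLAR_CONSTANT = 1361.0   # W/m^2 at Earth's distance
ALBEDO = 0.3              # fraction of sunlight reflected back to space

absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4  # averaged over Earth's sphere
t_eq = (absorbed / SIGMA) ** 0.25
print(f"equilibrium temperature ≈ {t_eq:.0f} K")  # ≈ 255 K
```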
B)Photosynthesis
Almost all life on Earth relies on solar energy for food, either directly or
indirectly.
Producers rely directly on solar energy. They absorb sunlight and convert it
into nutrients through a process called photosynthesis. Producers, also called
autotrophs, include plants, algae, and some bacteria. Autotrophs are the
foundation of the food web.
C)Fossil Fuels
Photosynthesis is also responsible for all of the fossil fuels on Earth. Scientists
estimate that about three billion years ago, the first autotrophs evolved in
aquatic settings. Sunlight allowed plant life to thrive and evolve. After the
autotrophs died, they decomposed and shifted deeper into the Earth,
sometimes thousands of meters. This process continued for millions of years.
Under intense pressure and high temperatures, these remains became what we
know as fossil fuels. Microorganisms became petroleum, natural gas, and
coal.
People have developed processes for extracting these fossil fuels and using
them for energy. However, fossil fuels are a nonrenewable resource. They take
millions of years to form.
4.2.SOLAR CONSTANT
The solar constant is a measurement of the solar electromagnetic radiation
arriving per square meter at Earth's distance from the sun. The solar constant is
used to quantify the rate at which energy is received upon a unit surface such as a
solar panel. In this context, the solar constant provides a total measurement of the
sun's radiant energy as it is absorbed at a given point.
Solar constants are used in various atmospheric and geological sciences.
Though called a constant, the solar constant is only relatively constant: it
varies by about 0.2% over a cycle that peaks once every eleven years.
The first attempt at estimating the solar constant was made by Claude Pouillet in
1838 at 1.228 kW/m². The constant is rated at 1.361 kW/m² at solar minimum and
1.362 kW/m² at solar maximum.
The entire spectrum of electromagnetic radiation is included in the
measurement of a solar constant and not just that of visible light. The best direct
measurements of the solar constant are taken from satellites. The Stefan–Boltzmann
constant can also be used as a means to calculate the solar constant. In this context, the
constant defines the power per unit area emitted by a black body as a function of its
thermodynamic temperature.
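The Stefan-Boltzmann route mentioned above can be carried out directly: the Sun's surface flux σT⁴, diluted by the square of the ratio of the solar radius to the Earth-Sun distance, reproduces the measured value of roughly 1.36 kW/m². The solar parameters used are standard reference values.

```python
# Illustrative sketch: deriving the solar constant from the Stefan-Boltzmann
# law. The flux at the Sun's surface, sigma * T^4, falls off with distance
# as (R_sun / d)^2, giving ~1361 W/m^2 at 1 AU.

SIGMA = 5.67e-8   # W/(m^2 K^4), Stefan-Boltzmann constant
T_SUN = 5772.0    # K, effective surface temperature of the Sun
R_SUN = 6.957e8   # m, solar radius
AU    = 1.496e11  # m, mean Earth-Sun distance

surface_flux = SIGMA * T_SUN**4             # power per m^2 at the Sun's surface
solar_constant = surface_flux * (R_SUN / AU)**2
print(f"solar constant ≈ {solar_constant:.0f} W/m^2")  # ≈ 1361 W/m^2
```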
The solar constant, denoted by the symbol GSC, is a flux density
measuring the mean solar electromagnetic radiation, i.e. the solar irradiance
per unit area. It is measured on a surface perpendicular to the sun's rays at a
distance of one astronomical unit (AU), roughly the mean distance
from the Sun to Earth.
The solar constant includes all types of solar radiation and not just visible light.
It is measured by satellite as 1.361 kilowatts per square meter,
written as kW/m², at solar minimum (the time in the 11-year solar cycle when
the number of sunspots is minimal) and approximately 0.1% greater, roughly 1.362
kW/m², at solar maximum.
Solar Constant Value
The most accurate measurements are made from satellites, where
atmospheric effects are absent, of the energy per unit time per unit area on a
theoretical surface perpendicular to the Sun’s rays at Earth’s mean distance from
the Sun.
The value of the constant is approximately 1.366 kilowatts per square
metre. The “solar constant” is fairly constant, increasing by only 0.2 per cent at the
peak of each 11-year solar cycle. Sunspots block out light and reduce
the emission by a few tenths of a percent, but the bright spots, known as
plages, that are associated with solar activity are more extensive and longer-lived.
Moreover, as the Sun burns up its hydrogen supply, the solar constant increases by
about 10 percent every billion years. The solar constant is not a physical constant in
the modern CODATA scientific sense, unlike the Planck constant or the speed of
light, which are absolutely constant in physics. The solar constant is an
average of a varying value.
In the past 400 years, it has varied by less than 0.2 per cent, although
billions of years ago it was significantly lower. The solar constant is
used in the calculation of radiation pressure, which in turn helps in calculating the
force on a solar sail.
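The solar-sail calculation mentioned above is short: radiation pressure at 1 AU is the solar constant divided by the speed of light, and a perfectly reflecting sail feels twice that because the light's momentum is reversed rather than merely absorbed.

```python
# Illustrative sketch: radiation pressure on a solar sail at 1 AU.
# An absorbing sail feels P = S / c; a perfectly reflecting sail feels 2S / c.

C = 2.998e8              # m/s, speed of light
SOLAR_CONSTANT = 1361.0  # W/m^2 at 1 AU

p_absorbing  = SOLAR_CONSTANT / C  # perfectly absorbing sail
p_reflecting = 2 * p_absorbing     # perfectly reflecting sail

print(f"absorbing:  {p_absorbing:.2e} Pa")   # ≈ 4.54e-06 Pa
print(f"reflecting: {p_reflecting:.2e} Pa")  # ≈ 9.08e-06 Pa
```

The pressure is tiny, which is why practical solar sails need very large, very light reflecting surfaces.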
THE DIMENSIONAL FORMULA FOR SOLAR CONSTANT
The solar constant is the incident solar energy per unit area per second
on the earth's surface.
Solar constant = Energy / (Unit area x Unit time)
= ML²T⁻² / (L²T)
= MT⁻³
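The dimensional algebra above can be double-checked mechanically by representing each dimension as a tuple of (M, L, T) exponents; dividing dimensions subtracts exponents.

```python
# Sketch verifying the dimensional formula of the solar constant.
# A dimension is an (M, L, T) exponent tuple; division subtracts exponents.

def divide(a, b):
    """Divide two dimensions given as (M, L, T) exponent tuples."""
    return tuple(x - y for x, y in zip(a, b))

ENERGY    = (1, 2, -2)  # M L^2 T^-2
AREA_TIME = (0, 2, 1)   # L^2 T

print(divide(ENERGY, AREA_TIME))  # (1, 0, -3)  ->  M T^-3
```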
5.5.VENKATARAMAN RAMAKRISHNAN:
BORN ON: 1952
BORN IN: Chidambaram in Cuddalore District, Tamil Nadu
CAREER: Structural Biologist
NATIONALITY: American
B) CAREER:
Venkataraman Ramakrishnan began his career as a postdoctoral fellow
with Peter Moore at Yale University, where he worked on ribosomes. After
completing this research, he applied to nearly 50 universities in the U.S. for a
faculty position, but he was unsuccessful. As a result, Venkataraman
continued to work on ribosomes from 1983 to 1995 at Brookhaven National
Laboratory.
In 1995, he got an offer from the University of Utah to work as a
professor of Biochemistry. He worked there for almost four years and then moved
to England, where he started working at the Medical Research Council Laboratory of
Molecular Biology. Here, he began detailed research on ribosomes.
In 1999, along with his colleagues, he published a 5.5 angstrom
resolution structure of the 30S subunit of the ribosome. In the subsequent year,
Venkataraman published the complete structure of the 30S subunit, and it
created a sensation in structural biology. Following this, he conducted several
studies on these cell organelles and their mechanisms. More recently, he determined the
complete structure of the ribosome along with the tRNA and mRNA.
D)TIMELINE:
●1952: Venkataraman Ramakrishnan was born in a small district of
Tamil Nadu.
●1971: He obtained an undergraduate degree in Physics.
● 1976: Received a Ph.D. from Ohio University.
● 1983-1995: Continued his studies on ribosomes in Brookhaven
National Laboratory.
● 1995: Got an offer to work as a professor of Biochemistry in the
University of Utah.
● 1999: Published a 5.5 angstrom resolution structure of the 30S
subunit of a ribosome.
●2007: Awarded the Louis-Jeantet Prize for his work in Medicine.
● 2008: Given the Heatley Medal of the British Biochemical Society.
●2009: Received the Nobel Prize in Chemistry for his work on ribosomes.
●2010: Recipient of the Padma Vibhushan for his contributions to
Science.
5.6.SUBRAHMANYAN CHANDRASEKHAR:
On 19 October 1910, Indian American astrophysicist Subrahmanyan
Chandrasekhar was born in Lahore, British India. He received the Nobel Prize for
Physics in 1983 along with William A. Fowler.
ACTIVE RESEARCH ERA: 20th Century
DATE OF DEATH: 08/21/1995
FIELD OF STUDY: Astrophysics
AFFILIATED INSTITUTIONS OF WORK:
• Presidency College- B.Sc (w/ honors) in Physics
• Cambridge University- Ph.D
• Trinity College- Fellowship
• University of Chicago- Faculty
Subrahmanyan Chandrasekhar, popularly known as “Chandra”, was an Indian-
American scientist and astrophysicist who spent his professional life in America.
He was one of the most renowned scientists of the 20th century. Subrahmanyan
Chandrasekhar's contributions to physics, applied mathematics, and astrophysics are
exceptional. He shared the Nobel Prize with William A. Fowler in 1983 for
important discoveries on the developmental stages of massive stars.
He was famous for the Chandrasekhar limit, the theory of
Brownian motion, the theory of illumination and the polarisation of the sunlit sky,
work on the general theory of relativity and relativistic astrophysics, and the mathematical
theory of black holes. In January 2011, an exhibition on his life and works was held
at Science City in Kolkata.
⚫ CHILDHOOD LIFE:
Subrahmanyan Chandrasekhar was born in Lahore, British India, on
19th October 1910. The family shifted from Lahore to Allahabad in 1916 and settled
in Madras finally in 1918. He had two elder sisters, three younger brothers, and four
younger sisters.
His father, Chandrashekhara Subrahmanya Iyer, was an officer in the Indian
Audits and Accounts Department. His mother, Sita, was a woman of high analytical
skills. C.V. Raman, the first Indian to be awarded the Nobel prize in science, was his
father’s younger brother.
⚫ MARRIED LIFE:
Subrahmanyan Chandrasekhar married Lalitha Doraiswamy in September
1936. She was a fellow student at Presidency College. They both got US citizenship
in the year 1953 and settled there.
⚫ EDUCATION :
Subrahmanyan Chandrasekhar completed his homeschooling with the help
of his parents until the age of 12. His father taught him physics and mathematics,
whereas his mother taught him Tamil.
He later joined Hindu High School in Triplicane, Madras, in 1922.
Afterwards, he was admitted to Presidency College, affiliated with the University of
Madras, from 1925 to 1930, and secured a B.Sc. (Hon.) degree in
physics.
After graduating, he joined Born’s institute at Göttingen. He
completed his final year of postgraduate studies at the Institute for Theoretical
Physics in Copenhagen. In 1933, he was granted a PhD at Cambridge with a
dissertation on rotating self-gravitating polytropes. Trinity College, Cambridge,
granted him a prize fellowship after he obtained his doctorate.
⚫ PROFESSION AND RESEARCH :
In December 1936, Subrahmanyan Chandrasekhar was appointed as an
assistant professor of Theoretical Astrophysics at Yerkes Observatory of the University
of Chicago. He was promoted to Associate Professor in 1941. In 1953,
he was named the Morton D. Hull Distinguished Service Professor of Theoretical
Astrophysics. In 1966, NASA funded the construction of the Laboratory for
Astrophysics and Space Research (LASR) at the university, and he occupied one of
the four corner offices on its second floor. During World War
II, in 1943, he also worked with the Ballistic Research Laboratory at Aberdeen
Proving Ground in Maryland.
He was declared a Fellow of the Royal Society of London and received the
society’s Royal Medal in 1963. He was also honoured with the US National Medal
of Science in 1967.
⚫ OTHER WORKS :
Subrahmanyan Chandrasekhar worked as an editor of “The Astrophysical
Journal” from 1952 to 1971. He also worked on a project dedicated to describing the
detailed geometric arguments by using the language and the methods of ordinary
calculus.
Subrahmanyan Chandrasekhar became a voluntary member of the
International Academy of Science. He published approximately ten books on
different topics of theoretical astrophysics. He guided over 50 students to their PhDs,
and many of them got Nobel Prizes, too.
⚫ SUBRAHMANYAN CHANDRASEKHAR’S CONTRIBUTION TO
ASTROPHYSICS :
Between 1929 and 1939, Subrahmanyan Chandrasekhar was deeply
interested in astrophysics. While travelling by ship in 1930 to start his PhD at
Cambridge, he calculated a number now known as the Chandrasekhar
Limit, named in his honour. Its value is about 1.4 solar masses, and it determines
the fate of stars. He announced this result in the Astrophysical Journal in 1931.
In 1930, scientists believed that all stars would gradually fade to become white
dwarfs. Chandrasekhar discovered that a star ends as a white dwarf only if its mass is
less than or equal to 1.4 times our sun’s mass. In a normal star, the inward pull of
gravity and the outward pressure from nuclear reactions are balanced. When the
star reaches the end of its normal existence, the outward push weakens and the star
shrinks; what happens next depends on its mass. The more mass there is, the
stronger the inward pull of gravity.
If the collapsing mass is less than or equal to the Chandrasekhar limit, the
star becomes a white dwarf, whereas if it is greater than the
Chandrasekhar limit, the star becomes a neutron star or black hole. It is thus
agreed that the ultimate fate of stars depends on their masses.
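The mass test described above can be sketched as a simple comparison. The Chandrasekhar limit of ~1.4 solar masses is from the text; the ~3-solar-mass boundary between neutron stars and black holes (the Tolman-Oppenheimer-Volkoff limit) is an added assumption for illustration, not from the text.

```python
# Sketch of the mass test above: compare a collapsing mass with the
# Chandrasekhar limit (~1.4 solar masses, from the text) to predict the
# remnant. The ~3 solar mass neutron-star / black-hole boundary is an
# assumed rough value (Tolman-Oppenheimer-Volkoff limit).

CHANDRASEKHAR_LIMIT = 1.4  # solar masses
TOV_LIMIT = 3.0            # solar masses, assumed rough boundary

def stellar_remnant(mass_solar):
    """Predict the remnant for a given collapsing mass in solar masses."""
    if mass_solar <= CHANDRASEKHAR_LIMIT:
        return "white dwarf"
    elif mass_solar <= TOV_LIMIT:
        return "neutron star"
    return "black hole"

for m in (1.0, 2.0, 5.0):
    print(f"{m} solar masses -> {stellar_remnant(m)}")
```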
⚫ THE NOBEL PRIZE :
Subrahmanyan Chandrasekhar was honoured with the Nobel Prize in physics
for his “theoretical studies of the physical processes of importance to the structure
and evolution of the stars”, shared with William Fowler.
⚫ LEGACY OF S CHANDRASEKHAR :
Chandrasekhar’s most notable work is on the astrophysical Chandrasekhar
limit. The limit gives the maximum mass of a white dwarf star, ~1.44 solar masses,
or equivalently, the minimum mass that must be exceeded for a star to collapse into
a neutron star or black hole (following a supernova). The limit was first calculated
by Chandrasekhar in 1930 during his maiden voyage from India to Cambridge,
England for his graduate studies.
On 19 October 2017, Google showed a Google Doodle in 28 countries
honouring Chandrasekhar’s 107th birthday and the Chandrasekhar limit.
⚫ CONCLUSION :
Subrahmanyan Chandrasekhar's discoveries about the origin and structure of stars
hold a major place in the world of science. His work in astrophysics is remarkable,
and he always chose to remain outside the mainstream of research. Throughout his
life's journey, he aimed to gain knowledge and understanding.