Mobile Computers
The first mobile computers were heavy and ran from mains power; the 50 lb IBM 5100 was an
early example. Later portables such as the Osborne 1 and Compaq Portable were considerably
lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed
this requirement by incorporating batteries, and with the continued miniaturization of computing
resources and advances in portable battery life, portable computers grew in popularity in
the 2000s. The same developments allowed manufacturers to integrate computing resources
into cellular mobile phones by the early 2000s.
These smartphones and tablets run on a variety of operating systems and have recently become
the dominant computing devices on the market. They are powered by systems on a chip (SoCs),
which are complete computers on a microchip the size of a coin.
Types
Computers can be classified in a number of different ways, including:
By architecture
Analog computer
Digital computer
Hybrid computer
Harvard architecture
By size and form factor
Mainframe computer
Server
Rackmount server
Blade server
Tower server
Personal computer
Workstation
Home computer
Desktop computer
Tower desktop
Slimline desktop
Multimedia computer (non-linear editing system computers, video editing PCs and the like)
Gaming computer
All-in-one PC
Home theater PC
Keyboard computer
Portable computer
Thin client
Internet appliance
Laptop
Gaming laptop
Rugged laptop
2-in-1 PC
Ultrabook
Chromebook
Subnotebook
Netbook
Mobile computers
Tablet computer
Smartphone
Ultra-mobile PC
Pocket PC
Palmtop PC
Handheld PC
Wearable computer
Smartwatch
Smartglasses
Single-board computer
Plug computer
Stick PC
Computer-on-module
System on module
System in a package
Microcontroller
Hardware
The term hardware covers all of those parts of a computer that are tangible physical objects.
Circuits, computer chips, graphics cards, sound cards, memory (RAM), motherboards, displays,
power supplies, cables, keyboards, printers and input devices such as mice are all hardware.
Input devices
Input devices supply unprocessed data to the computer, where it is processed and sent on to
output devices. Input devices may be hand-operated or automated. The act of processing is
mainly regulated by the CPU. Some examples of input devices are:
Computer keyboard
Digital camera
Digital video
Graphics tablet
Image scanner
Joystick
Microphone
Mouse
Overlay keyboard
Real-time clock
Trackball
Touchscreen
Output devices
The means through which a computer gives output are known as output devices. Some
examples of output devices are:
Computer monitor
Printer
PC speaker
Projector
Sound card
Video card
Control unit
The control unit (often called a control system or central controller) manages the computer's
various components; it reads and interprets (decodes) the program instructions, transforming
them into control signals that activate other parts of the computer. Control systems in advanced
computers may change the order of execution of some instructions to improve performance.
A key component common to all CPUs is the program counter, a special memory cell (a
register) that keeps track of which location in memory the next instruction is to be read from.

The control system's function is as follows (note that this is a simplified description, and some of
these steps may be performed concurrently or in a different order depending on the type of
CPU):
1. Read the code for the next instruction from the cell indicated by the program counter.
2. Decode the numerical code for the instruction into a set of commands or signals for each of
the other systems.
3. Read whatever data the instruction requires from cells in memory (or perhaps from an input
device). The location of this required data is typically stored within the instruction code.
4. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware
to perform the requested operation.
5. Write the result from the ALU back to a memory location or to a register or perhaps an output
device.
6. Jump back to step (1).

Since the program counter is (conceptually) just another set of memory cells, it can be changed
by calculations done in the ALU. Adding 100 to the program counter would cause the next
instruction to be read from a place 100 locations further down the program. Instructions that
modify the program counter are often known as "jumps" and allow for loops (instructions that
are repeated by the computer) and often conditional instruction execution (both examples of
control flow).
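The cycle above can be sketched in a few lines of Python. This is a minimal illustration for a hypothetical toy machine: the opcodes, the accumulator design, and the program itself are invented for this example, not taken from any real instruction set.

```python
# Toy machine illustrating the fetch-decode-execute cycle (opcodes invented).
LOAD, ADD, STORE, JUMP, HALT = range(5)

# Program: load 5, add 7, store the result in cell 10, halt.
memory = [LOAD, 5, ADD, 7, STORE, 10, HALT, 0, 0, 0, 0]
accumulator = 0
program_counter = 0            # the register tracking the next instruction

while True:
    opcode = memory[program_counter]            # step 1: fetch
    operand = memory[program_counter + 1]
    program_counter += 2
    if opcode == LOAD:                          # step 2: decode, then execute
        accumulator = operand
    elif opcode == ADD:
        accumulator += operand                  # step 4: the ALU does the arithmetic
    elif opcode == STORE:
        memory[operand] = accumulator           # step 5: write the result back
    elif opcode == JUMP:
        program_counter = operand               # a "jump" just rewrites the counter
    elif opcode == HALT:
        break

print(memory[10])   # 12
```

Note how a jump is nothing special: it simply overwrites the program counter, which is why ALU arithmetic on the counter can redirect execution.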
The sequence of operations that the control unit goes through to process an instruction is in
itself like a short computer program, and indeed, in some more complex CPU designs, there is
another yet smaller computer called a microsequencer, which runs a microcode program that
causes all of these events to happen.
Superscalar computers may contain multiple ALUs, allowing them to process several
instructions simultaneously. Graphics processors and computers with SIMD and MIMD features
often contain ALUs that can perform arithmetic on vectors and matrices.
Memory
A computer's memory can be viewed as a list of cells into which numbers can be placed or
read. Each cell has a numbered "address" and can store a single number. The computer can be
instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in
cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information
stored in memory may represent practically anything. Letters, numbers, even computer
instructions can be placed into memory with equal ease. Since the CPU does not differentiate
between different types of information, it is the software's responsibility to give significance to
what the memory sees as nothing but a series of numbers.
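The cell-and-address model described above can be mimicked with an ordinary Python list; the addresses 1357, 2468 and 1595 follow the example in the text, and the second value (877) is chosen arbitrarily.

```python
# Memory as a numbered list of cells, each holding a single number.
memory = [0] * 4096

memory[1357] = 123        # "put the number 123 into the cell numbered 1357"
memory[2468] = 877        # arbitrary second value for the example
# "add the number that is in cell 1357 to the number that is in
#  cell 2468 and put the answer into cell 1595"
memory[1595] = memory[1357] + memory[2468]

print(memory[1595])   # 1000
```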
In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight
bits (called a byte). Each byte is able to represent 256 different numbers (2⁸ = 256); either from 0 to 255
or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or
eight). When negative numbers are required, they are usually stored in two's complement notation. Other
arrangements are possible, but are usually not seen outside of specialized applications or historical
contexts. A computer can store any kind of information in memory if it can be represented numerically.
Modern computers have billions or even trillions of bytes of memory.
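Python's built-in integer conversions make the two interpretations of a byte, and the use of several consecutive bytes for larger numbers, easy to demonstrate:

```python
# One byte with all bits set reads as 255 unsigned, or -1 in two's complement.
raw = bytes([0xFF])
print(int.from_bytes(raw, "big", signed=False))   # 255
print(int.from_bytes(raw, "big", signed=True))    # -1

# Four consecutive bytes store a larger number (big-endian byte order here).
big = (1_000_000).to_bytes(4, "big")
print(list(big))                        # [0, 15, 66, 64]
print(int.from_bytes(big, "big"))       # 1000000
```

The same bit pattern carries no intrinsic meaning; whether it is read as signed, unsigned, text or an instruction is decided entirely by the software interpreting it.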
The CPU contains a special set of memory cells called registers that can be read and written to
much more rapidly than the main memory area. There are typically between two and one
hundred registers depending on the type of CPU. Registers are used for the most frequently
needed data items to avoid having to access main memory every time data is needed. As data
is constantly being worked on, reducing the need to access main memory (which is often slow
compared to the ALU and control units) greatly increases the computer's speed.
Input/output (I/O)
I/O is the means by which a computer exchanges information with the outside world. Devices
that provide input or output to the computer are called peripherals. On a typical personal
computer, peripherals include input devices like the keyboard and mouse, and output devices
such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve
as both input and output devices. Computer networking is another form of I/O.
I/O devices are often complex computers in their own right, with their own CPU and memory. A
graphics processing unit might contain fifty or more tiny computers that perform the calculations
necessary to display 3D graphics. Modern desktop computers contain many smaller computers
that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own
computer circuitry.
Multitasking
While a computer may be viewed as running one gigantic program stored in its main memory, in
some systems it is necessary to give the appearance of running several programs
simultaneously. This is achieved by multitasking i.e. having the computer switch rapidly between
running each program in turn. One means by which this is done is with a special signal called an
interrupt, which can periodically cause the computer to stop executing instructions where it was
and do something else instead. By remembering where it was executing prior to the interrupt,
the computer can return to that task later. If several programs are running "at the same time",
then the interrupt generator might be causing several hundred interrupts per second, causing a
program switch each time. Since modern computers typically execute instructions several
orders of magnitude faster than human perception, it may appear that many programs are
running at the same time even though only one is ever executing in any given instant. This
method of multitasking is sometimes termed "time-sharing" since each program is allocated a
"slice" of time in turn.

Before the era of inexpensive computers, the principal use for multitasking
was to allow many people to share the same computer. Seemingly, multitasking would cause a
computer that is switching between several programs to run more slowly, in direct proportion to
the number of programs it is running, but most programs spend much of their time waiting for
slow input/output devices to complete their tasks. If a program is waiting for the user to click on
the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is
waiting for has occurred. This frees up time for other programs to execute so that many
programs may be run simultaneously without unacceptable speed loss.
Multiprocessing
Some computers are designed to distribute their work across several CPUs in a multiprocessing
configuration, a technique once employed only in large and powerful machines such as
supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple
CPUs on a single integrated circuit) personal and laptop computers are now widely available,
and are being increasingly used in lower-end markets as a result.
Supercomputers in particular often have highly distinctive architectures that differ significantly
from the basic stored-program architecture and from general-purpose computers. They often feature
thousands of CPUs, customized high-speed interconnects, and specialized computing
hardware. Such designs tend to be useful only for specialized tasks due to the large scale of
program organization required to successfully utilize most of the available resources at once.
Supercomputers usually see usage in large-scale simulation, graphics rendering, and
cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.