CEC367 IIoT Unit V
• WirelessHART was created for Industrial IoT and features low latency, high reliability, a
focus on battery life longevity, and a medium bandwidth of 150 MBps.
• Low Power Wide Area Network (LPWAN) focuses more on extremely long link distances of greater than a mile and long battery life (ideally over 10 years), but with lower throughput in the range of bits per second, and is designed for agricultural, industrial, medical, and smart city applications.
• Wi-Fi, Bluetooth, WLAN, and ZigBee are other popular wireless technologies that
operate in the 2.4 ~ 5 GHz range and have a focus on high data rates over short distances.
• Other standards such as LoRa (169 ~ 915 MHz) and SigFox (868 to 928 MHz) are used in longer-range and lower-data-rate applications.
• LTE CAT-M also offers higher bandwidth, high throughput, and low latency.
Once we’ve pinpointed our device's functionality and decided on the accompanying protocol, the
next step is to consider antenna directionality, form factor, and gain.
Antenna Directivity: Directivity expresses the concentration of a beam of radiation in a particular
direction. Omni-directional antennas are, therefore, somewhat evenly concentrated in all three
dimensions, while a directional antenna exhibits narrower radiation patterns. This is often
accomplished by combining multiple radiating elements.
Antenna Form Factor: Depending on our protocol, there are a variety of antennas we could
incorporate into IoT devices. Different antennas have different frequency bandwidths, which must
be taken into consideration.
• Whip and paddle antennas offer the advantage of modularity as they are not integrated
into the PCB of IoT devices, making a physical connection with the PCB over a coaxial
connector. They are commonly used in wireless connectivity for IoT applications like
ISM, LoRa, and LPWAN. Whip antennas, specifically quarter-wave whip antennas, are a
type of monopole antenna with a ground plane replacing one of the radiating elements.
Larger installations may include quarter-wavelength radials mounted perpendicular to the
antenna for optimal performance.
• Patch antennas are commonly employed in GPS-enabled IoT devices. They can be
designed for either right-handed circular polarization (RHCP) or left-handed circular
polarization (LHCP). Some patch antennas may exhibit only a single type of polarization,
such as linear, RHCP, or LHCP. Selecting a polarization that matches the transmission is
crucial for optimal performance. Patch antennas can also be designed for dual polarization
with reconfiguration using PIN diodes or RF MEMS devices.
• PCB antennas, composed of conductive traces on circuit boards, offer the advantage of
fitting into small spaces and can have higher gains than chip antennas. Different PCB
antenna topologies, including inverted-F, L, and folded monopole designs, are available.
The ground plane is crucial for PCB antenna performance, affecting bandwidth, radiation
efficiency, and radiation pattern. Despite occupying board space, PCB antennas are cost-
effective and provide design flexibility.
• Chip antennas are even more compact and well suited for small IoT devices. They have relatively low bandwidth and perform best when used with large ground planes and at lower frequency bands, in applications such as computers, satellite radios, and GPS devices. However, integrating chip antennas into densely populated boards can pose challenges.
Selecting the right antenna design for IoT applications is crucial for optimal performance and
functionality in terms of wireless connectivity, GPS-enabled devices, space constraints, and design
flexibility.
When working on antenna designs for IoT and 5G applications, it is crucial to consider regulatory
standards in different regions worldwide, including the Radio Equipment Directive (RED),
Electromagnetic Compliance, FCC Class A and B Rules, and SAR requirements.
The key parameters to consider when selecting an antenna are:
• Antenna type
• Operating frequency band
• Field of View (FoV)
• Radiation pattern
• Antenna gain (radiated power density relative to an isotropic radiator)
• Shape
Compact antennas with high gain are especially desirable to support increased densities of connected devices operating simultaneously at higher data rates. Addressing this challenge will necessitate higher cell densities and broader utilization of multiple-input, multiple-output (MIMO) antenna technologies, which are already used in existing networks.
MIMO involves an array of multiple transmitting and receiving antennas, commonly found in
current LTE networks as an 8 x 8 antenna array. Using spatial multiplexing, MIMO breaks down a
signal into encoded streams, which are transmitted simultaneously through different antennas in the
array. The transmitting and receiving devices are equipped with multiple antennas and employ
signal processing for encoding and decoding the multiplexed signals.
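To make the spatial-multiplexing benefit concrete, the short Python sketch below applies the idealised rule of thumb that an N × N MIMO link approaches N parallel streams, each with the single-antenna Shannon capacity; the 20 MHz bandwidth and 20 dB SNR figures are illustrative assumptions, not values from the text.

import math

# Idealised spatial multiplexing: capacity scales with the number of parallel streams.
def mimo_capacity_bps(n_streams: int, bandwidth_hz: float, snr_linear: float) -> float:
    return n_streams * bandwidth_hz * math.log2(1 + snr_linear)

single = mimo_capacity_bps(1, 20e6, 100)   # one antenna, 20 MHz channel, 20 dB SNR
array8 = mimo_capacity_bps(8, 20e6, 100)   # 8 x 8 array as in the LTE example above
print(f"{single / 1e6:.0f} Mbps vs {array8 / 1e6:.0f} Mbps")  # ~133 vs ~1066 Mbps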
Today’s wireless technologies are at the heart of the smart revolution that’s changing the way we
live and work. It’s common for connected “things” to support multiple wireless standards, such as
cellular, Wi-Fi, and Bluetooth, while also integrating GPS for location and tracking.
Equipment designers are under pressure to cram multiple radios within extremely tight size
constraints. The latest silicon and package technologies allow the engineering of some parts of the
radio circuit to occupy less space on the board. The antenna, however, is subject to laws of physics
that allow only minimal flexibility. There are strict constraints on effective length to ensure
resonance at the desired frequency, while multi-radio designs require adequate spatial separation
between adjacent antennas to minimise unwanted coupling. With these limitations in mind,
recommended best practice is to prioritise the positioning of antennas that are the most susceptible
to disruption by others. This suggests working initially on the cellular antenna, if fitted, followed by
GPS, and subsequently Wi-Fi and Bluetooth antennas. Clearly, designers have more freedom to
optimise the relative positions if antenna placement is considered early in product design and not
tackled as an afterthought.
Choosing and placing an antenna on the circuit board demands a balance between achieving the
smallest possible solution and ensuring adequate radiation efficiency in the appropriate frequency
range. A planar inverted F antenna (PIFA) is a common choice for the cellular radio. It is resonant at
a quarter wavelength of the carrier signal, which allows for small size. The PIFA should be placed at
the corner of the board and the ground-plane length needs to be one-quarter the wavelength of the
carrier signal. A PIFA for a low-band 5G frequency, say 617MHz, will need a ground-plane length
of 120mm. It can be tempting to try reducing the length, to allow more space for placing other
components on the board. However, this will narrow the antenna bandwidth leading to sub-optimal
performance.
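As a quick check on the dimensions quoted in this section, the Python sketch below computes quarter- and half-wavelengths from the carrier frequency; it is a simple free-space estimate, not a substitute for antenna simulation.

# Free-space wavelength fractions for the frequencies mentioned in this section.
C = 299_792_458  # speed of light in m/s

def wavelength_mm(freq_hz: float) -> float:
    return C / freq_hz * 1000

print(round(wavelength_mm(617e6) / 4))   # ~121 mm quarter-wave, matching the ~120 mm ground plane
print(round(wavelength_mm(2.4e9) / 2))   # ~62 mm half-wave, matching the 2.4 GHz figure later on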
More recently, loop antennas made from stamped metal and designed for surface-mount assembly
have entered the market. These have high radiation efficiency and maintain stable performance in
the presence of metal objects placed nearby. This can be important if, for example, a metallic
component such as an RJ45 connector needs to be placed nearby on the board due to space
constraints. With suitable matching components, these antennas can be designed to cover the full
LTE band from 700MHz to 2.7GHz with high radiation efficiency. As a loop antenna, the ideal
placement for these is in the centre of the board edge, rather than the corner.
As far as GPS antennas are concerned, ceramic patch and ceramic loop types are often used. Their
circular polarised response ensures good sensitivity to the GPS signal, which also has circular
polarisation. This allows the use of small-size antennas, which can be helpful in applications that
place tight constraints on form factor. However, if the antenna is excessively miniaturised,
polarisation can cease to be circular, and performance is thus impaired. Antennas as small as 9mm x
9mm are available, although 18mm x 18mm and 25mm x 25mm are more commonly used.
The directionality of the patch type antenna makes this a good choice for applications where the
orientation of the equipment is fixed or can be controlled so that the antenna faces the sky
continuously.
Clearly, consistent sky-facing orientation cannot be assured in some mobile applications such as
wearable devices or asset trackers. In this case, a ceramic loop antenna can offer a superior
alternative, owing to its omnidirectional response. Also, these antennas can be smaller than ceramic
patch types. As with any loop antenna, it should ideally be positioned in the middle of a board edge.
For multi-constellation applications, multi-band ceramic-loop antennas are available off the shelf
from various manufacturers.
After selecting cellular and GPS antennas, and ensuring satisfactory positioning, the Wi-Fi and
Bluetooth radios can be considered. Typically, a loop antenna is preferred for optimal performance,
with greatest immunity to detuning by the presence of nearby antennas and components. They are
well suited for wearable products, such as smart watches, and can be ceramic or stamped-metal
types. For operation in the 2.4GHz frequency range, the half-wavelength dimension is 62mm.
Optimising for Best Performance
When the antennas have been selected and their notional locations determined, further work is
needed to finalise the position of each antenna to ensure optimal performance. Adequate isolation
between antennas is critical and is quantified by the S2,1 relationship between adjacent antennas.
The typically tight space constraints in portable and mobile applications mean some level of
interaction is unavoidable. However, coupling between any two antennas causes a proportion of the
antenna’s efficiency to be lost since a portion of power is coupled to the adjacent antennas instead
of being radiated. In the worst case, excessively strong coupling can cause interference. A GPS
receiver can be particularly susceptible to the influence of an antenna close by. In this case, an S2,1
of at least -15dB or, better still, -20dB, is recommended.
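As a quick sanity check on those isolation figures, the Python sketch below converts an S2,1 value in dB into the fraction of transmitted power that ends up coupled into the neighbouring antenna.

# Fraction of power coupled into an adjacent antenna for a given S21 magnitude (dB).
def coupled_power_fraction(s21_db: float) -> float:
    return 10 ** (s21_db / 10)

print(f"{coupled_power_fraction(-15):.1%}")  # ~3.2% of the power is coupled
print(f"{coupled_power_fraction(-20):.1%}")  # ~1.0% of the power is coupled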
It is also vital to investigate any detuning effects caused by objects in the near field of any of the
antennas in the system. Specific antenna types, such as PCB-trace and wire antennas are more
susceptible to detuning. Although loop-type antennas, in particular, are generally robust and can
maintain performance under non-ideal conditions, the effects of objects such as the plastic enclosure
can shift the antenna’s resonance point away from the desired frequency and ultimately reduce the
signal strength at the receiver. Other hazards include the effects of cables and nearby metal objects
or surfaces, which can couple with antennas and impair their efficiency. The effect of the cover
glass of a display can be particularly acute. The stamped metal antennas mentioned earlier can be
more resistant to these effects.
On the other hand, internal FPC or PCB cabled antennas are not optimised to operate in free space.
Instead, placement next to a PCB or housing is expected. Some types are available with an optional
foam layer to provide extra decoupling between the antenna and an adjacent surface – particularly a plastic, glass, or metal surface – to optimise efficiency.
Instead of developing a custom design from scratch, there is a wide range of wireless modules
available that are pre-certified for a given application. Providing added flexibility for connectivity
in IoT battery-driven applications where low power consumption is a critical factor, there are
combination wireless modules. These modules enable simultaneous yet independent operation of,
for example, WiFi for when high data rates are required and, when not needed, Bluetooth Low
Energy.
Chip Package System Development
System-on-chip (SoC), system-in-package (SiP), multi-chip modules (MCM), and discrete chips on
the PCB — which circuit design should we use when developing a cutting-edge, reliable IoT
device? Each system implementation has specific capabilities, offering pros and cons depending on
your priorities. However, if we’re looking to develop a product optimized for IoT use cases, a SoC
approach is your best bet.
SoC offers a level of flexibility, customization, and compartmentalization that traditional
implementations lack. And while SiP, MCM, and discrete-chips implementations can handle
complex operational demands well, SoC is designed to do so while saving energy and money, especially when the end devices are deployed at large scale.
By looking at the differences between SoC and more traditional system structures, it’s clear why
SoC is an ideal IC solution for products that demand IoT compatibility.
SoC is a type of integrated circuit that incorporates most of an electronic device’s hardware and
software components on a single semiconductor substrate to reduce space and increase efficiency.
As an example, a SoC may include:
• Microcontroller Unit (MCU): The main controlling module for a product
• RF Module: The component responsible for wireless communication
• Non-volatile Memory: An internal unit that stores program code and other non-volatile system information
• Volatile Memory: Another storage unit used to store temporary code and data
• Digital Signal Processor (DSP): A unit that aids in receiving and interpreting signals from connected devices and helps the MCU execute complex programming
• Peripheral Ports: Physical interfaces where external devices like USB drives, Ethernet cables, etc., can connect to the apparatus and communicate with it
• Power Management: A dedicated circuit to manage and optimize the power consumption of the SoC's different operating modes
Because SoCs aren’t subject to specific developmental standards, they can be tailored to fit
applications with their own list of requirements.
Today, IoT developers are looking for high functionality and performance at an affordable price
point. To this end, the SoC approach offers a few key advantages over the other less integrated
options. For instance, the structural nature of multi-chip implementations requires extensive supplier
sourcing, making manufacturing costs higher than those for SoCs. Multi-chip options also demand
more time during the research and development stage to ensure that all components are
appropriately arranged to work in unison, further extending a product’s time to market. On the other
hand, SoC-based devices are much easier to produce from start to finish since they require only one
chip to work with, considerably reducing a product’s time to market and preserving battery life that
would otherwise be drained by additional communication between several ICs.
Ultimately, SoCs are designed to function holistically, allowing system processes to work better,
faster, and with minimal latency, making them ideal for IoT applications that must deliver performance in the most efficient way.
There are three conflicting challenges in the chip packaging system development with the IoT. First,
the cost needs to come down, particularly for edge devices. Second, these devices need to be semi-
customized or fully customized for different markets. And third, there needs to be more pre-
processing in devices to limit the amount of data being moved around. To resolve these competing
demands we have to introduce simpler processing on the edge nodes and make that easily affordable to SoC design teams. It is easier to design these devices by pre-assembling the different elements into systems, which are ready to use and easier to customize. This also includes security,
because security is an increasingly important area of IoT. We have migrated TrustZone technology
from the mobile world to the IoT. We have extended that to the subsystem, making it easier to use in
a subsystem context. All of this is designed to make life easier for IoT design teams. It’s a better
way than to design something from scratch, assembling small bits of IP. It’s a kind of LEGO play,
where we plug in the processor, memory and data compression.
IoT edge devices have four domains: an analog sensor, signal conditioning or analysis, processing (or perhaps pre-processing), and a communications interface, which is used to send that data out to the wider world. If we look at
these devices, we have the challenge of integrating most of those domains into one system. That’s a
big challenge because not everyone has core competencies in all of those areas. That’s a tremendous
integration challenge. The biggest challenge is being able to take our MEMS model into analog and
mixed signal simulation, and deal with RF all as one integrated system. Ultimately that’s what these
devices are, and we have to pre-converge them just to be able to hit the target cost.
It’s important to remember that many of these devices are being produced by startup companies.
What’s vital to them is time to market and time to money. They’re burning cash, and there may only
be a certain amount of time they have to burn that cash. The more we can use pre-built units, the
more we can reduce the number of engineers required for projects and the months of development
and design work.
The market the device goes into may not be known. It may go into multiple markets. That creates a
real challenge for some of these IPs. For what standards should they be created and tested? It’s one
thing if we know something is going into the automotive market to design it for the automotive
standard. It’s another if we don’t know where the product will end up. But it’s very important to
have standardization of those IPs and the level to which they’re created.
There's a lot of customization. We can do these whole systems as a custom piece, but very early in the lifecycle we've got to get to time to money. So a lot of the time, companies will come up with a
few key building blocks that they can build around, and then add a few distinct pieces that need to
be done for that specific market. So if it’s a dedicated pressure sensor for truck tires, we build that
as a custom piece. And over time, as the market matures, we will do higher and higher levels of
integration until it’s finally one monolithic piece of silicon, which is the cheapest way to make
these.
In IoT we have two different effects. One is that we want to automate as much as possible to be as
efficient as possible. The other movement is to add as much flexibility as possible to follow the
evolution of the market. We can’t build configurability and be optimal at the same time. As an IP
provider we try to serve as broad a market as possible, but we know there are constraints that need
to be taken into account. We still need to leave some room for flexibility, because in IoT we may
have a device that will be out in the field a long time and it will have to be upgraded regularly. We
have to have new features in the future. If we design an SoC that will be in the field for 10 years, we
will have to upgrade the firmware, the security, and make sure it is still usable 10 years from now.
There will have to be additional processing power and memory in our design to make sure we can
use it in the future.
Some of these designs will end up in places we didn’t expect, too, which could include safety-
critical markets. Those have completely different parameters. And we don’t know what security will
be like in 10 years. It’s difficult to convert something that is not functionally safe to something that
is. We have to design that from scratch. Security we can improve over time by upgrading the
firmware. But we do need to design in features that allow us to upgrade that security.
Something that is designed for a car's infotainment system may be connected to other safety-critical systems in the future, so we have to manage different levels of safety in the car and the
interactions between different things. Security and safety are both very closely related because
they’re about a process rather than a product. At every level of our design we have to make sure it’s
functionally safe or that it’s secure. Those things have to happen not just across the design of the
semiconductor or the system, but across the design of the service they enable. So IoT security
touches every part of the supply chain. We have to think about it at every level or else it will fail.
It's the same for functional safety. If we haven't planned for it in advance, we won't succeed. For
security, that means field upgradeability, and for both it means very careful attention needs to be
paid to this. It’s also a matter of upgrading the supply chain. If it’s manipulated along the way, that
can be a big problem, too.
We have to pick our design house carefully. They need to be able to deal with issues like
electromigration and rapid transient switching and many other things. So we really have to
understand how chip design companies have dealt with these issues before. They may have worked
in chips designed for satellites, and we need to look at their case studies and speak to their chief
designer. They need to be able to think these things through and have experience in designing them,
because a small IoT chip will face these kinds of issues in environments we may not know about.
There’s a lot the EDA industry has done to address some of those needs. That includes everything
from the addition of aging models to the compact models used for SPICE simulation. There are
electromigration and power analysis tools, which let us get a much better idea of where we’re at.
It’s not just simulation. It’s also static analysis. The EDA industry is taking this issue seriously, and
we’re trying to address that from a tools and expertise perspective.
Power Electronics
Power requirements differ depending on the devices and their design. The highest power consumption comes from devices that operate continuously with very little downtime; they use a large amount of energy to run. Even so, their energy demand can be significantly lower than that of traditional equipment requiring massive energy inputs. Devices that work extensively generally use lead-acid or lithium-ion batteries, which can stay charged for longer durations.
Apart from that, there are certain IoT devices, like smart thermostats, that consume significantly
less power. As their operation is limited, they may take the room temperature, pass the information
to the system, and go into sleep mode. This saves plenty of energy for the device. Such devices
usually use alkaline galvanic cells that keep them running for extended periods.
Figure 1: Many Internet of Things devices will be made up of numerous components, each of which
will require varying amounts of power
It is pretty evident that however unique our IoT systems may be, they can not operate without
power. Modern technologies counter this problem by identifying new energy sources and
developing energy-efficient electronic devices. Many IoT-based companies offer complete solutions
to all kinds of IoT needs.
It must be clear now that IoT devices can operate using different power sources. Follow these three
steps when considering what power supply to select for your IoT-based system:
#1: IoT Setup Requirements
We must know our IoT setup requirements and consider if we wish to create a full network or need
a power source just for one or two devices. Then, depending upon our requirement, we can either
power our devices from an external network or purchase our batteries.
#2: Cost
Next, calculate the costs we will bear to power up our IoT setup. Explore all the options and
consider their costs. The most cost-effective way is taking power from an already established
network.
#3: Backup Needs
Do not forget about the backup needs we will require in our setup. If we are establishing an
industrial IoT setup, we will have to keep a backup system to help prevent downtime for our
machine and operations.
Power Supply & IoT Device Operation
There is a significant impact on the operations of IoT devices with different power supplies. The
following describes how the right power source for an IoT device can impact the system:
• Better Efficiency: Every digital device has specified power requirements for its ideal use. When the device is always provided with the manufacturer-specified power input, it performs with higher efficiency and delivers exceptional results with an increased lifespan.
• Lower Downtime: As the device works more effectively with the right power supply, its maintenance needs are significantly lower. This improves the working hours of the device and provides higher uptime to the process.
• Higher Productivity: It is natural that, with upgraded efficiency and performance, the device's productivity also increases. So supplying the manufacturer-specified power to your IoT devices can help achieve higher work productivity.
Different Battery Types
The following are the different types of batteries used in various IoT devices.
Lead Acid Batteries
Lead-acid batteries are highly popular these days, especially because of their usage in electric
vehicles. The lead-acid batteries consist of acid and two electrodes made of lead. The anode is
made up of lead, while the cathode is made up of lead oxide. The electrolyte in the battery is
present in the form of acid. Lead-acid batteries have low internal resistance, which helps them
generate extremely high power. Large IoT infrastructures and machines generally require this
type of power supply.
Alkaline Batteries
These are the most prevalent battery types that are widely used in everyday life in our TV or AC
remotes, watches, many wireless devices, etc. The electrodes in this battery type are made of
iron and nickel, and alkali (NaOH or KOH) is used as an electrolyte.
Alkaline batteries usually have the limitation of a shorter life span. This is because they tend to develop greater internal resistance, which leads to higher self-discharge. Though this problem has now been addressed through technology for intercalating hydrogen into iron, these batteries still deliver less power than other battery types. That is why this type of battery is mostly used for portable, low-power IoT devices that require very little power to operate.
Lithium-ion Batteries
These are highly prominent batteries used worldwide in all modern applications such as laptops,
cell phones, etc. Lithium-ion batteries are wrapped together in aluminum and copper foil. Inside
the foil lies a porous substance that is saturated with lithium electrolyte. The electrolyte in
lithium-ion batteries contains a base usually made of graphite, oxide, or salt of the metal. The
lithium ions are intercalated into the base to initiate the chemical reaction and generate a high
energy density.
Alternative Power Sources for IoT
Though chemical-based batteries provide greater performance and reliability to IoT systems, they
are extremely harmful to the environment. Hence, the need for replacement is being substantially
addressed, and more use of alternative power sources is being encouraged.
Solar Panels
This power source has gained immense popularity in recent years. It uses the photovoltaic effect in
semiconductors to generate power for IoT devices. Solar panels have a simple operation. When a
quantum of light hits the surface of a semiconductor, the electrons jump to a higher energy state,
producing power that can be leveraged for multiple purposes.
Thermoelectric Effect Power
This is a powerful option that works on the concept of the Seebeck effect. In this method, a heterogeneous conductor is used whose two sides are held at a substantial temperature difference. Because of this temperature gradient, the ions and electrons move towards the colder side and generate sufficient electromotive force to power IoT electronic devices.
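As a rough illustration of the Seebeck relation V = S × ΔT, the sketch below uses an assumed module-level Seebeck coefficient of 50 mV/K; the numbers are illustrative, not taken from the text.

# Seebeck relation: open-circuit voltage = coefficient x temperature difference.
SEEBECK_V_PER_K = 0.05   # 50 mV/K for a multi-junction module (assumed value)
DELTA_T_K = 20           # 20 K difference between hot and cold sides (assumed)

print(f"{SEEBECK_V_PER_K * DELTA_T_K:.1f} V of electromotive force")  # 1.0 V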
Atomic Batteries
This type of power source is very promising and is used in space programs to power satellites. In
this method, the isotope batteries are designed with nanodiamonds, and the batteries can generate
power for years. However, the limitation of this power source is that it only generates a small
current and needs an accumulator to store and constantly draw power from the source.
Figure 5: An EH PMIC handles the charging of the energy buffer and powering the application
Typical EH PMICS in the market today have a fixed architecture and input voltage range designed
to operate with a particular type of harvester. This precludes using an alternative harvester to
capture additional ambient energy if one source alone cannot satisfy the system requirement. If
several energy sources are needed, therefore, a dedicated EH PMIC is needed for each one. This
adds to the system cost, size, and power consumption, and can also complicate the design.
Electromagnetic Interference/Compatibility (EMI/EMC)
For any IoT device to operate reliably, signal integrity (SI) and power integrity (PI) must be high.
This is particularly important in low-voltage or high-clock-frequency circuits, which are much less
tolerant of crosstalk. The four key SI challenges are around a single net, the couplings where
multiple nets meet, power distribution networks’ power and ground paths, and electromagnetic
interference (EMI). Designers can address these by minimising power delivery network impedance,
shortening the return path lengths, controlling impedances through interconnects, reducing coupling
by ensuring sufficient space between circuit traces, and through good shielding and grounding. PI
looks at how well source power is converted and transmitted to where it will be used. In the low-
power devices many IoT designers are creating, DC supply voltages must be delivered within
tolerances of just 1 %. These incredibly tight bands mean data and clock signals could be impacted
by any transients, ripple or noise on the supply rails. The challenge is to measure AC signals on
these rails – as the signals continue to get faster and smaller.
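To put that 1 % figure in perspective, the sketch below computes the allowable deviation for an assumed 0.9 V core rail; the rail voltage is an illustrative value, not one given in the text.

# Allowable DC deviation for a 1 % supply tolerance.
V_NOMINAL = 0.9      # assumed example core rail, in volts
TOLERANCE = 0.01     # 1 % tolerance as stated above

print(f"+/- {V_NOMINAL * TOLERANCE * 1000:.1f} mV")  # +/- 9.0 mV window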
The huge range of use cases for IoT devices means lots of different wireless technologies and
standards are emerging and being used. Where self-driving cars will need highly reliable, high-
bandwidth connections, a sensor running off a small battery will likely use a short-range wireless
connection with a low duty cycle. Other devices, such as smartphones, support multiple wireless
standards (including Bluetooth, Wi-Fi, NFC and cellular). Designing equipment that supports
multiple standards makes measurement and testing increasingly complex, because each standard
will have different test requirements. Designers need to ensure their components can work together
effectively and adhere to more than one standard concurrently. On top of the design challenges,
testing compliance with multiple standards can be expensive if separate equipment is needed for
each standard. This is why many are adopting flexible, multi-standard testing instruments that allow
for the addition of new standards as these emerge. As the number of IoT devices expands, so
communications resources are becoming more crowded, particularly the (unlicensed) ISM radio
band.
For designers, this means ensuring their products will work effectively in busy signal bands, without
causing co-channel or adjacent-channel interference. This is essential if the products are to comply
with network and regulatory requirements. Moreover, given that many IoT devices will be operating
simultaneously and in close proximity to other equipment, they’ll need to undergo radiated and
conducted emissions and immunity testing. The tools used to test the devices must therefore also
comply with the relevant standards.
As the IoT continues to grow around the world, more and more RF-based, internet enabled devices
are being introduced to industries all around us. While the use of these devices continues to make
our world more efficient, data-driven, and optimized, the ever-growing presence of wireless devices
that utilize electromagnetic communication has created a profound design challenge for EMI/EMC
engineers.
Wireless RF-based devices utilize electromagnetic radiation to communicate amongst each other
and, as the number of other adjacent RF-based devices grow, these devices can be subject to
electromagnetic interference from other electrical devices. When these devices are not properly
protected from interfering radiation, and communication signal filtering is not properly
implemented, their communication ability and overall performance can be severely depleted.
EMC Coupling
Perhaps the first type of coupling that often comes to mind is radiated coupling, whereby the
electromagnetic radiation from one device couples into the circuitry of other devices, where this
may result in functional disturbances. This radiated coupling can be countered with RF shielding of
the independent devices to quickly resolve the issue. However, in some complex systems radiated
coupling can occur within a single device -- if this is the case, further EMC analysis of the circuit
design may be needed.
While specific EMC measures can be taken to mitigate EMI between separate devices, there are
many intra-device coupling phenomena that can create poor signal integrity and losses in communication and power signals without the device ever being influenced by another device. There are four types of
coupling that can result from poor circuit layout design that unintentionally contains either induced
currents or current loops, which may lead to electromagnetic incompatibilities internally or
externally. Luckily, there are also various ways of avoiding these coupling paths and many
mitigation measures that can be applied to reduce EMI.
For example, reducing the common trace length of different circuitry loops, or even implementing a
star-point topology, can help reduce the amount of galvanic coupling occurring in your circuit.
Reducing operating frequency and the length of parallel traces of different signal types decreases
the capacitive coupling. In order to reduce inductive coupling, it is best to minimize circuit loop
size.
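As a first-order illustration of why parallel trace length and edge rates matter, the sketch below estimates capacitively coupled noise current using i = C × dV/dt; the capacitance, voltage swing, and edge rate are assumed illustrative values.

# First-order capacitive coupling estimate between two adjacent traces: i = C * dV/dt.
C_COUPLING_F = 2e-12   # 2 pF trace-to-trace capacitance (assumed)
DV_V = 3.3             # aggressor signal swing in volts (assumed)
DT_S = 1e-9            # 1 ns edge rate (assumed)

i_coupled = C_COUPLING_F * DV_V / DT_S
print(f"{i_coupled * 1e3:.1f} mA injected into the victim trace")  # ~6.6 mA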
According to Oracle, IoT devices will reach 22 billion by 2025. These devices are diverse, too, with
applications ranging from consumer wearables to wireless sensors in factories.
From a design perspective, one of the prevailing concerns with IoT devices is spectrum noise. As
such, electronic systems worldwide are managed for interoperability by government regulations
with rigorous electromagnetic compatibility (EMC) testing.
The basic goal of EMC is to ensure that devices placed in proximity to one another do not interfere
with the normal operation of a second device, known colloquially in the EMC world as the “victim circuit”. When a device falls victim to an aggressor device, it is said to suffer from
electromagnetic interference (EMI) at a certain level of susceptibility.
Multiple potential coupling paths exist between the emitting device and the susceptible device, and designers must
be aware of what mechanisms their devices might be subject to in the field.
EMC standards like FCC Part 15B, CISPR-32, and the RSS-GEN are nominally maintained by the governments of the United States, the European Union, and Canada, respectively.
Gaps in the return plane of a signal increase emissions at the point of the gap as Mr. Wyatt shows
in a small demonstration video from 2017. EMC can be achieved when designers consider EMC
compliance early in the design phases.
While this technique reduces peak emissions as a result of the clock, it can increase the complexity
in the clock circuitry.
Sensing and actuating are carried out at the lowest layer of the architecture, also referred to as the
device layer. The next layer up, the edge layer, enables the communication between the devices and
the application layer. Typically, this communication is enabled by semi-capable devices behaving as
hubs, collecting data from the sensors and relaying it into the cloud and sending commands to the
actuators as necessary.
With the key components of IoT in mind, we can now form a full definition of IoT: a paradigm that enables interconnectivity in anything and everything to create a monitoring and control infrastructure that can be used in applications to enrich everyday user experience.
To begin understanding the problem space of reliability in IoT, it is best to use the architecture, as
presented above, as a reference point. We can then observe reliability issues in each of the layers of
the architecture and understand how they contribute to the problem.
Device reliability
From a device perspective, that is the sensors and actuators, the first problem we can observe is the
highly constrained nature of these devices. These constraints concern battery, memory and
computational capacity. Battery is a concern for IoT applications, because often the application
layer is unaware of the remaining battery left on the device thereby making it difficult to determine
when the device requires a battery replacement. This battery life concern is further compounded
when we consider that devices may be located in places that are physically difficult or dangerous to
reach to replace. The memory and CPU constraints on the devices limit the device’s ability to store
complex encryption methods, meaning that IoT devices must rely on lightweight encryption to
protect the data being transmitted by the device.
Another issue evolves from the constrained nature of the devices when it comes to updating the
limited firmware of these low-powered sensors. It is impractical, due to the lack of power and
implications on battery life for the device, to connect to a cloud service routinely and check if new
firmware needs to be downloaded and installed on the device. This leads to a scenario where
devices could potentially be operating with outdated firmware, thereby leaving them vulnerable to
security breaches.
The sensors and actuators that are used in the IoT are often deployed in remote and distant
locations, and can often be subject to harsh environmental conditions such as heat, freezing
temperatures, mechanical wear, vibration, and moisture. So, there is a need to determine the “useful
life” period of a device, so that we can determine when the device needs to be retired. This useful
life will shorten if the device is employed in a harsh environment, therefore, we could expect to see
great variances of device lifetime for identical devices deployed in different environments, which
results in the system reliability being difficult to manage.
Another concerning aspect with regard to device reliability in IoT, is the propensity for sensors to
“fail-dirty”. This phenomenon concerns a scenario where a sensor continues to send erroneous
readings after having suffered a failure. This is a well-known, yet little understood, problem that is
pervasive in IoT environments. In particular, this issue is hard to diagnose because the sensor
appears to be operating normally. The impact of a false reading being sent in an IoT environment
can be critical, when we consider that actuation often has physical impact on human lives.
Communication and network reliability
Mobility is one of the key expectations of an IoT network, whereby users of the network can
dynamically move between applications while the device onboarding and identification happens
seamlessly in the background. Global addressing, however, is a difficulty in IoT, given that
manufacturers do not co-ordinate to provide globally unique identifiers for all IoT devices. This
means that the responsibility of assigning unique identification resides within the IoT network itself.
When we consider that IoT devices are expected to be mobile, this creates a problem given that the
device ID might differ across different networks, meaning that we might lose traceability of the
device. This then introduces a reliability concern when it comes to tracking or auditing the device as
it moves through different IoT applications.
Internet Protocol (IP) is the current de-facto standard for communication and identification in
traditional networks. IP in its current state is, however, not well suited to the IoT. Introducing new
protocols into this problem space will require these new protocols to mature quickly, which is not
always easy. This problem is exacerbated further when we consider the implications of unique
addressing. IPv4 has a 32-bit address, which creates room for about 4.3 billion addresses. Keeping in mind the predictions of 50 billion devices discussed previously, it becomes clear that IPv4 is not suitable to fulfil the vision of IoT. This problem is further compounded by the fact that IPv4 ran out of addresses in 2010. As such, it becomes necessary to implement a protocol with a suitable addressing space, such as IPv6, which boasts an address space of 128 bits, allowing room for approximately 3.4 × 10^38 addresses. This new addressing space, however, creates problems for constrained devices, not all
of which are capable of handling the overheads required for the address.
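The address-space arithmetic behind that comparison is simple enough to show directly; the sketch below just evaluates 2^32 versus 2^128.

# Address space sizes referenced above.
ipv4_addresses = 2 ** 32     # ~4.3 billion
ipv6_addresses = 2 ** 128    # ~3.4 x 10^38
print(f"IPv4: {ipv4_addresses:.2e} addresses")
print(f"IPv6: {ipv6_addresses:.2e} addresses")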
A remedy to this large address overhead is offered by the 6LoWPAN protocol. 6LoWPAN is able to
compress the header size of the IPv6 packets in order to make them compatible with the IEEE
802.15.4 standard, and thus better suited to the IoT. These new and emerging standards, created to cope with the new requirements of the IoT, contribute to a landscape of disparate standards and protocols for communication among constrained devices in IoT networks. Given the lightweight and constrained nature of some of these protocols, not all of
them feature quality of service (QoS) guarantees, meaning that the reliability of the network
connection becomes harder to assess.
A typical power supply for an electronic system is shown in Figure 1. The primary source of energy
is a battery, normally an electrochemical device [5]. The battery can be a primary type that is
discarded after it is discharged, or a rechargeable type. As shown in Figure 1, a fully charged
Lithium-ion battery supplies 4.2 volts and when the voltage drops below 3.0 volts it is recharged.
The electronic system is supplied a voltage VDD that is close to 1 volt or lower for modern nanometer technologies. A DC-to-DC converter [2, 6] provides the voltage transformation as well as the capability to vary VDD for power management. Because the current requirement of the
electronic system is often pulsed and time varying, decoupling capacitors are used to smooth the
transient ripples. The decoupling capacitors are, in general, distributed across the power grid of the
system. The size of a battery is specified in terms of the electrical charge it can supply. A Lithium-
ion battery of 400mAHr can supply 400mA for one hour. It will supply 200mA for two hours.
While 400mA is the rated current for this battery, up to three times the rated current or 1.2A can be
drawn for a duration of 20 minutes. However, a discharge rate higher than this causes noticeable power loss in the internal impedance of the battery, resulting in heating. This results in a loss of efficiency as defined below. The time for which a fully charged battery can supply current before requiring recharge is called its lifetime.
To avoid loss in efficiency, we must use a larger battery. For lithium-ion batteries, 400mAHr is considered a unit cell. Using multiple cells in parallel enhances the current capacity and lifetime. Thus, a battery size N means a battery consisting of N unit cells. For example, a battery of size N = 5 will be rated at 2AHr.
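The capacity arithmetic above can be captured in a couple of lines; the sketch below assumes ideal discharge (no efficiency loss) purely to illustrate the unit-cell scaling.

# Ideal runtime of a battery built from 400 mAHr unit cells (no efficiency loss assumed).
UNIT_CELL_MAH = 400

def runtime_hours(n_cells: int, load_ma: float) -> float:
    return n_cells * UNIT_CELL_MAH / load_ma

print(runtime_hours(1, 400))    # 1.0 hour at the rated current
print(runtime_hours(1, 1200))   # ~0.33 hour (20 minutes) at three times the rated current
print(runtime_hours(5, 1000))   # a size N = 5 (2 AHr) battery supplying 1 A: 2.0 hours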
Battery Model
Analysis of the performance of a battery in a system requires an analyzable model of the battery. Of
the three types of models, namely, electrochemical, mathematical and electrical models, we use the
last one. Even among electrical models, there are several types. An excellent summary of various
kinds of models is given by Chen and Rincón-Mora [4], who also provide the model we have used here. This battery model, as shown in Figure 2, consists of two parts, described below.
(A) Battery lifetime. The state of charge (SOC) is defined as 1.0 for a fully charged battery. It is represented by a voltage V_SOC, which ranges between 0 and 1 volt. The charge of the battery is stored in a capacitor C_Capacity whose value is determined as follows:

C_Capacity = 3600 × Capacity × f1(Cycles) × f2(Temp)

where Capacity is the AHr rating of the battery. Thus, 3600 × AHr is the total amount of charge in coulombs. As the battery goes through cycles of charging and discharging, its capacity to hold charge is affected, reducing the usable capacity; that is represented by f1(Cycles). Similarly, temperature affects the usable capacity, and that is represented by f2(Temp). For simplicity, we have assumed both factors to be unity in the present discussion. The resistance R_Self-Discharge represents leakage when the battery is stored over a long period; for reasonable times between recharges, this can be considered large or practically infinite. The current source I_Batt represents a source when the battery is being charged or a load when the battery is powering a circuit. In the latter case, it is the current supplied to the DC-to-DC converter and to the circuit after conversion. When the model is used to simulate the behavior of a fully charged battery, V_SOC is initialized to 1 volt.
(B) Voltage-current characteristics. The circuit on the right in the figure above emulates the terminal voltage of the battery as it supplies current. This part is linked to the part on the left by the state of charge (SOC), a quantity in the (0.0, 1.0) range. V_OC(SOC) is the open-circuit voltage. For Lithium-ion batteries, Chen and Rincón-Mora empirically derive expressions for the circuit components, which all depend on SOC:
V_OC(SOC) = −1.031 × e^(−35×SOC) + 3.685 + 0.2156×SOC − 0.1178×SOC^2 + 0.3201×SOC^3
R_Series(SOC) = 0.1562 × e^(−24.37×SOC) + 0.07446
R_Transient_S(SOC) = 0.3208 × e^(−29.14×SOC) + 0.04669
C_Transient_S(SOC) = −752.9 × e^(−13.51×SOC) + 703.6
R_Transient_L(SOC) = 6.6038 × e^(−155.2×SOC) + 0.04984
C_Transient_L(SOC) = −6056 × e^(−27.12×SOC) + 4475
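The SOC-dependent expressions above are straightforward to evaluate; the Python sketch below computes the open-circuit voltage and series resistance as a function of SOC, purely as an illustration of how the model parameters behave (a full lifetime simulation would still be done in a circuit simulator such as HSPICE, as described next).

import math

# Chen and Rincón-Mora model parameters as functions of SOC (0.0 to 1.0).
def v_oc(soc: float) -> float:
    return (-1.031 * math.exp(-35 * soc) + 3.685
            + 0.2156 * soc - 0.1178 * soc ** 2 + 0.3201 * soc ** 3)

def r_series(soc: float) -> float:
    return 0.1562 * math.exp(-24.37 * soc) + 0.07446

print(round(v_oc(1.0), 2))      # ~4.10 V near full charge
print(round(v_oc(0.05), 2))     # open-circuit voltage near the discharged end
print(round(r_series(0.5), 4))  # series resistance at 50% SOC, in ohms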
To the original model in the figure above we have added a zero-voltage source, V_Sense. This is done to facilitate HSPICE simulation, in which we must specify the value I_Batt of the current source in the battery-lifetime portion as equal to the current through this voltage source V_Sense. The current is sensed as positive if it flows into the positive terminal of V_Sense.
Finding the Right Battery
The analysis to find a matching battery for an electronic system contains several steps:
• Step 1 (Determine circuit characteristics). The circuit is simulated for several supply voltages (VDD) to find its critical path delay. This gives the clock frequency for each VDD. Using the corresponding clock frequency, the average current consumption is determined for each VDD.
• Step 2 (Determine smallest battery size). The model of the selected battery type is simulated for
various current loads obtained in the previous step. Every battery type has its terminal voltages
corresponding to the fully charged state and the fully discharged state. Using the load current, scaled for the ratio of battery voltage to circuit VDD, the battery model is simulated to determine the terminal
voltage as a function of time. In practice this scaling is achieved by a DC-to-DC converter that is
known to have high conversion efficiency (greater than 90%). Alternatively, the circuit of DC-to-
DC converter can be attached to the battery model. The time between the fully charged state to the
fully discharged state gives the battery lifetime in time units (seconds). This is repeated for
increasing battery sizes, normalized with respect to the smallest unit. A lower bound on battery size
is determined for a minimum of 85% efficiency. While the selected battery should not be smaller, its
actual size is determined by the recharge interval requirement of the system.
• Step 3 (Determine minimum energy modes). The previous step determines two battery sizes,
namely, the smallest usable battery that meets the performance requirement and another size that
can meet both performance and recharge interval requirements. We now determine maximum
lifetime modes for each battery. In this mode the performance requirement is completely relaxed
and the supply voltage (VDD) is determined for maximum lifetime in clock cycles. For some
nanometer technologies, this VDD can be below the transistor threshold voltage, i.e., in the sub-threshold region.