
TELECOM ANALYTICS

Realtime Analytics & Social Media Analytics

Abstract
This document contains information about real-time analytics and sentiment analytics.
Information provided in this document is used for teaching purposes only, and copyright
belongs to the respective copyright owners.

Dr. Shirshendu Roy


Realtime Analytics

Event-Driven Analytics: What You Need to Know

In business, you need to know how your business is doing day-to-day. To understand
the health of a company, many turn to data-driven intelligent operations, which are
helping companies in many different industries become more effective and efficient. As
the amount of data generated by sources inside and outside the company increases, as
software tools become readily available to process that data immediately, and as
competition in every sector intensifies, intelligent business operations are becoming
increasingly practical.
Systems set up for complex event processing monitor the streams of data flowing through
an organization, and, based on business rules, investigate relationships between
individual events that may signal the occurrence of some larger event.
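The rule-based monitoring described above can be sketched as a minimal, hypothetical complex-event-processing check: it watches a stream of individual events and flags a larger composite event when a business rule matches. Event names, the window size, and the threshold here are all invented for illustration.

```python
from collections import deque

# Hypothetical CEP rule: flag a "larger event" when several related
# low-level events occur within a short sliding window of the stream.
def detect_composite_event(events, window=5, threshold=3):
    """Return True if `threshold` or more 'login_failed' events fall
    inside any sliding window of `window` consecutive events."""
    recent = deque(maxlen=window)
    for event in events:
        recent.append(event)
        if sum(1 for e in recent if e["type"] == "login_failed") >= threshold:
            return True  # e.g. a possible brute-force attempt
    return False

stream = [
    {"type": "page_view"},
    {"type": "login_failed"},
    {"type": "login_failed"},
    {"type": "login_failed"},
    {"type": "purchase"},
]
print(detect_composite_event(stream))  # True
```

A production system would evaluate many such rules concurrently over live streams, but the core idea is the same: individual events are only signals; the rule defines the larger event.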

What is Event-Driven Analytics?

Event-driven analytics is analytics performed in response to a specific event. The term
is often used to refer to analytics triggered by a particular action being taken. As data
streams arrive in real time, companies use complex event processing (to process data
in real time and extract consumer data) to predict customer behaviour and recommend
products and services to clients, motivating real-time decisions.

Events are mostly interactions a user has with any form of content. In a broader
framework, event analytics covers real-time analytics along with the processing of
historical event data. Analytics in this sense helps provide insights into what the events
mean and how each event should be treated. An event in a program can generate data
that can be compiled to capture detailed information about how users interact with it.

An event could be a customer selecting a product or service for payment, a customer with
a more negative sentiment than usual that a business needs to address quickly, or some
misinformation that a company wants to catch before it impacts its stock price. Events
can be generated by users, such as keyboard strokes or mouse clicks, or by the IT
system itself, such as program errors.

A user-generated event can be a newsletter sign-up, a product purchase, a click-through
to a link, and so on. The goal of event data management is to track all of that data, analyse
it, and turn it into dashboards or reports that your enterprise users can use to make
decisions. In short, event data management is about capturing the actions of people
in a business through their interactions with software.

Here are some ideas to keep in mind when you are developing your own event data
management strategies:

1. Start by setting up an Event Tracking System (ETS) for events within your
environment that don’t require any hardware or software purchases and will make
life easier for you down the road.

2. Start simple. Capture events with a set of rules, or a form template, and track a
user’s input as a string of text. Do not capture data as it is entered; rather, capture
events as you define them.

3. The more flexible the system, the better. A system may include features such as
capturing images from forms or using JavaScript code on websites to integrate
functionality into your programs, or external tools that you use on a regular basis.
Events are the purpose of an ETS; they are not an end in themselves. A best
practice for creating an event tracking system is to first create the event types that
the majority of users will enter into your system, then define the more complex
requirements that may be needed for more advanced tasks.

4. The larger the volume of events you are capturing, the more likely it is that you will
want to consider distributing the data.
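The "start simple" and "define your event types first" advice above can be sketched as a hypothetical minimal event tracker: it captures only events you define, as typed records, and rolls them up into counts for a report. All class and event names are illustrative.

```python
import time

# Hypothetical minimal event-tracking store: capture only events you
# define up front, each as a (timestamp, type, payload) record.
class EventTracker:
    def __init__(self, allowed_types):
        self.allowed_types = set(allowed_types)  # the defined event types
        self.events = []

    def capture(self, event_type, payload=""):
        if event_type not in self.allowed_types:
            return False  # ignore anything outside the defined event types
        self.events.append({"ts": time.time(), "type": event_type,
                            "payload": str(payload)})
        return True

    def report(self):
        # Simple per-type counts, suitable for a dashboard or report.
        counts = {}
        for e in self.events:
            counts[e["type"]] = counts.get(e["type"], 0) + 1
        return counts

tracker = EventTracker(["signup", "purchase", "click"])
tracker.capture("signup", "user@example.com")
tracker.capture("click", "/pricing")
tracker.capture("keystroke", "x")   # not a defined event type: ignored
print(tracker.report())             # {'signup': 1, 'click': 1}
```

Note how raw input ("keystroke") is rejected rather than recorded, matching the advice to capture events as you define them, not data as it is entered.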

Benefits of Event-driven Analytics

1. Better focus on value creation by looking at higher levels of detail. An event-based
approach offers much greater focus than a time-based one by ignoring other
components and viewing only what has occurred at a given point in the time-frame.
2. Easy to expand. Event-driven solutions allow for easy expansion by allowing you
to add new data sources or move data from existing sources.
3. Real-time correlation with external systems allows for better
integration and reporting capabilities as well as supporting real-time dashboards,
alerts, and reports.

Event-based analytics helps build the data warehouse in near real-time for your real-time
analytics needs. This means even as your business transactions are happening, your
warehouse is being developed, too.

Use Cases of Event-based Analytics

Increasingly, companies are leveraging event-driven analytics to identify event patterns
and their impact on business trends and key performance indicators. Many companies
are tracking and acting fast on this information using complex event processing to keep
up with competitors, or in some cases, identifying and resolving customer or operational
issues as soon as possible.

Some real-world uses for event-driven analytics are:

In the area of cybersecurity, event-based analytics is used to identify zero-day attacks.
An event-driven system is set up to capture data as it arrives and process it in real time.
These can be instances of particular attacks, such as Distributed Denial of Service
attacks, attacks by botnets, and so on. In the area of telecommunications, event-driven
analytics is used to build converged network services that provide a highly efficient
customer experience. For example, an end-to-end SIP call analysis service can detect
whether a specific call was dropped due to an IP conflict between two endpoints, whether
a specific voice quality was not achieved after calling multiple numbers from one place,
or whether certain countries/cities have higher call drops than others. Other examples of
event-based analytics include high-capacity IP video calls, IP telephony to IP video calls,
and the like.

Event-driven analytics is also used a lot in stock markets. It is also used in credit card
fraud detection.

In conclusion: the success of businesses has increased, and the number of operations
that are both intelligent and efficient is increasing. Companies are increasingly able to
use data quickly in order to make decisions that are both effective and practical. Event-
driven analytics is helping to increase the efficiency of companies.

What is Network Performance Monitoring?

Network performance monitoring is a continuous process of measuring, monitoring, and
optimizing the performance of a telecom network with the help of performance monitoring tools.

Simple Network Management Protocol (SNMP) gives an organization using network
monitoring tools the ability to quickly identify the devices connected to the network,
monitor the performance of the network, keep track of changes to the network, and
determine the status of network devices on a real-time basis.

Another type of telemetry is network flow data, produced by protocols like NetFlow and
jFlow. It can be thought of as a digital record of the connections made over a network,
tracking details like the source and destination IP addresses, how often communication
between them takes place, and how long it lasts.

A different form of telemetry network performance tools employ is network packets,
which carry the data around a network. Capturing these packets helps diagnose
and solve network challenges, including assessing a cloud application, planning a
migration to the cloud, or network capacity planning.

Importance of Network Performance Monitoring


Since businesses today rely on a strong and secure network to run their operations and
deliver a smooth and user-friendly experience to their customers, monitoring becomes
extremely important, and any deterioration in the quality of your network performance
impacts business. Businesses of all sizes need a performance monitoring tool to help
them monitor problems, anticipate potential disruptions, and address them beforehand.
Continuous network monitoring helps them keep a congestion-free network up and
running.

Organizations can also gain an immediate ROI with the right set of network performance
tools, as the IT team can focus more on the more critical tasks rather than manually
digging and checking on the network performance. Having a network performance tool at
their disposal also helps organizations meet their service level agreements concerning
network availability, thereby ensuring a satisfied client base. Network performance tools
also help validate whether there is a need to upgrade servers by looking at the historical
records of your server performance.

Benefits of Network Performance Measuring Tool


Good customer experience is quintessential for companies, and this also becomes the
prime reason to monitor network performance. A network performance tool helps identify
slow-performing devices along with the causes of such poor performance. These tools focus
on the start and end of the data path, examining each network node to assess the
performance at every point on the network. They also help monitor the data transfer rate at
every single node and single out devices with poor performance.

Apart from issue identification, a good tool also helps ensure continuous uptime, prevent
outages, and detect and troubleshoot CPU and server bottlenecks like utilization, speed,
and idle time. Moreover, a good performance monitoring tool allows you to identify
bandwidth hogs early on, enabling improved performance. A fully loaded performance
monitoring tool also helps monitor and reduce packet loss due to errors and discards
indicative of a problem with the switch or the device interacting with the switch. A network
performance monitoring tool can also help you monitor your WAN connections, check
WAN latency, and identify network traffic across your WAN network. This enables you to
allocate resources accordingly to prioritize traffic and react proactively to issues affecting
your WAN network.

Key Network Monitoring Tools


While network monitoring tools focus on aspects like performance monitoring, fault
monitoring, and account monitoring, they’re also used to examine components such as
applications, email servers, and more. While there are several network monitoring tools
available in the market, choosing the right one, with in-depth research and tracking
capability, is challenging.

 SolarWinds Network Performance Monitor
 Nagios
 Zabbix
 Spiceworks
 Icinga
 PRTG Network Monitor
 Site 24×7
 Atera
 ManageEngine OpManager
 Zenoss Cloud

Network performance monitoring is the process of visualizing, monitoring, optimizing,
troubleshooting, and reporting on the health and availability of your network as
experienced by your users. Network Performance Monitoring (NPM) tools can utilize
different types of telemetry, including:

 Device metrics, such as Simple Network Management Protocol (SNMP), Windows
Management Instrumentation (WMI), command-line interface (CLI), application
programming interface (API), logs, and synthetic tests.
 Network flow data, such as NetFlow, jFlow, IPFIX, etc.
 Packet data. Everything you do on a network requires packets; they carry data
across the network.

How does Network Performance Monitoring work?

Network Performance Monitoring solutions traditionally collected data from a variety of
sources: SNMP, flow data, and packets. Each provides a different perspective on the
problem; when combined, they provide a complete understanding of the health of your
network and the applications running over it. Network Performance Monitoring solutions
are typically available as hardware, virtual, and cloud software, so you have complete
visibility across hybrid or multi-cloud environments.

SNMP

Simple Network Management Protocol (SNMP) is used to manage and monitor network
devices and their functions. One of the most widely supported protocols, it provides the
information necessary for fast detection of network infrastructure outages and failures.
SNMP delivers critical information and network diagnostics about network device and
interface availability and other performance indicators, such as bandwidth utilization,
packet loss, latency, errors, discards, CPU, and memory. SNMP is supported on an
extensive range of hardware—from routers and switches to endpoints like printers,
scanners, and internet of things (IoT) devices.
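To illustrate how SNMP-delivered indicators feed alerting, here is a hypothetical threshold check over metrics polled from a device. The metric names and limits are invented for the example; real polling would query OIDs via an SNMP library and map the results into a structure like this one.

```python
# Hypothetical thresholds for SNMP-polled device metrics (illustrative values).
THRESHOLDS = {
    "bandwidth_utilization_pct": 80.0,
    "packet_loss_pct": 2.0,
    "latency_ms": 150.0,
    "cpu_pct": 90.0,
}

def check_device(metrics):
    """Return a list of alert strings for metrics that exceed their thresholds."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(f"{name}={value} exceeds {limit}")
    return alerts

# One round of polled values for a single device (made up for the sketch).
polled = {"bandwidth_utilization_pct": 92.5, "packet_loss_pct": 0.4,
          "latency_ms": 35.0, "cpu_pct": 97.0}
for alert in check_device(polled):
    print(alert)
```

In this sample poll, bandwidth utilization and CPU exceed their limits, so two alerts are raised; packet loss and latency stay silent.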

Flow data

Flow data is generated by network devices, such as routers and switches. Like a phone
bill that tells you who you spoke to, how long you talked, and how often, flow data provides
similar information. We know who communicates with whom, when, how much data was
transferred across the network, for how long, and how often; but we do not
know what the subject of the conversation was.
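The phone-bill analogy can be made concrete with a hypothetical aggregation over flow records. The record fields and addresses are illustrative; real NetFlow/IPFIX records carry more fields, but the roll-up per conversation pair works the same way.

```python
from collections import defaultdict

# Hypothetical flow records: who talked to whom, how much, for how long.
flows = [
    {"src": "10.0.0.5", "dst": "10.0.0.9", "bytes": 1200, "seconds": 3},
    {"src": "10.0.0.5", "dst": "10.0.0.9", "bytes": 800,  "seconds": 2},
    {"src": "10.0.0.7", "dst": "10.0.0.9", "bytes": 5000, "seconds": 10},
]

def summarize(flow_records):
    """Aggregate per (src, dst) pair: total bytes, total time, conversation count."""
    summary = defaultdict(lambda: {"bytes": 0, "seconds": 0, "count": 0})
    for f in flow_records:
        key = (f["src"], f["dst"])
        summary[key]["bytes"] += f["bytes"]
        summary[key]["seconds"] += f["seconds"]
        summary[key]["count"] += 1
    return dict(summary)

for pair, totals in summarize(flows).items():
    print(pair, totals)
```

Notice what is absent: nothing in the records reveals the payload, only the "who, when, how much, how often" of the conversation.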

Benefits of using NetFlow for monitoring network traffic

Monitoring and analysing flow data helps obtain valuable information about network
users, peak usage times, and traffic patterns. In contrast with SNMP data, flow-
based network monitoring can understand the traffic patterns, provide a holistic view
for monitoring network bandwidth utilization and WAN traffic, support QoS validation and
performance monitoring, and it can also be used for network troubleshooting.

Full packet capture

Packet capture is the term for passively copying a data packet that is crossing a specific
point in a network and storing it for analysis. The packet is captured in real-time and
stored for a period of time so that it can be analysed. Packets help diagnose and solve
network problems such as:

 Troubleshooting detrimental network and application activities
 Identifying security threats
 Troubleshooting network errors, like packet loss and retransmissions
 Understanding capacity issues
 Forensic network analysis for incident response

There is a saying at Riverbed that “Packets don’t lie.” A packet consists of two things:
a payload and a header. The payload is the actual contents, such as a voice call or an
email message, while the header contains metadata, including the packet's source and
destination addresses. Entire packets or specific portions of a packet can be captured
and retained.
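The header/payload split can be illustrated by parsing a tiny, made-up packet layout with Python's struct module. This is a toy format invented for the sketch, not a real protocol header, but it shows how the metadata (addresses, length) frames the actual contents.

```python
import struct

# Toy packet layout (not a real protocol): 4-byte source address,
# 4-byte destination address, 2-byte payload length, then the payload.
HEADER_FMT = "!4s4sH"
HEADER_LEN = struct.calcsize(HEADER_FMT)  # 10 bytes

def parse_packet(raw):
    """Split a raw byte string into header fields and payload."""
    src, dst, length = struct.unpack(HEADER_FMT, raw[:HEADER_LEN])
    payload = raw[HEADER_LEN:HEADER_LEN + length]
    return {"src": ".".join(str(b) for b in src),
            "dst": ".".join(str(b) for b in dst),
            "payload": payload}

# Build a sample packet, then parse it back.
packet = struct.pack(HEADER_FMT, bytes([10, 0, 0, 5]),
                     bytes([10, 0, 0, 9]), 5) + b"hello"
print(parse_packet(packet))
# {'src': '10.0.0.5', 'dst': '10.0.0.9', 'payload': b'hello'}
```

Capturing only the header (the first 10 bytes here) is the "specific portions" case mentioned above: it preserves who-talked-to-whom without retaining the contents.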

The Riverbed Solution

The Riverbed Unified NPM unifies device monitoring, flow monitoring, and full packet
capture and analysis solutions. These solutions are tightly integrated together so that you
can more quickly troubleshoot complex performance issues. They also integrate into
the Riverbed Portal that combines them into collated, easy-to-use dashboards that
streamline analysis of complex problems.

By blending Application Performance Monitoring (APM), End User Experience Management
(EUEM), and NPM data from industry-leading tools, Riverbed creates a dynamic map of
network and application performance. Different business and IT teams can gain a
complete picture of the environment. Instead of wasting valuable time and resources, you
can rapidly diagnose and fix service issues before end users notice.

Ten Steps to Better Application and Network Performance

01: Pinpoint the problem: Voice, video, and social media traffic can drag your network
performance down. Evolving application architectures, increased network latency, and
encrypted web application traffic require sophisticated monitoring tools that can track
holistic performance across your hybrid infrastructure by application, location, and user.
Then, you can quickly and accurately identify the source of any problem.

02: Fix issues before they become problems: No one likes finding out about
performance problems from end users. New application frameworks using containers
and microservices have created multi-dimensional data flows that prolong incident
resolution. Incorporate AIOps capabilities to collate and correlate high-volume,
high-velocity, and varied datasets so you can respond proactively before users notice.

03: Integrate and automate infrastructure management: Move beyond identifying
performance problems and managing disruption incidents. Fully integrate and automate
critical network management functions like monitoring and change management, as well
as network audits, real-time topology, and inventory management.

04: Get proactive about cybersecurity: Organizations are moving towards zero trust
networks. Cyber threat hunting, incident forensics, threat intelligence and real-time
network security analytics are key for detecting disruptions in performance, like zero-day
threats, and unauthorized intrusions across the network.

05: Assure cloud performance: The primary reason to manage network performance
is to ensure users can access their applications in a timely manner. Multi-cloud IaaS
creates visibility blind spots for Cloud Ops which can be effectively solved with packet
and flow monitoring. Ensure better user experience and maximize business productivity
with end-to-end monitoring across the hybrid infrastructure.

06: Consolidate your tools: Less is more. A good performance management solution
can consolidate the number of tools you use to monitor and troubleshoot your network,
applications, infrastructure, and end-user environments, and provide a single, integrated
view across domains.

07: Understand the dynamic context of each business service: It is critical to
understand business dependencies on applications and infrastructure during service
migration for digital transformation. Modelling a business service by automated mapping
of its components (users, locations, application servers, authentication services, web
servers, and traffic between services) is critical to assure business continuity.

08: Monitor SLAs of SaaS applications: Service level agreements are even more
critical as organizations implement SaaS applications, which they do not have direct
control over. IT needs visibility to measure and ensure vendors adhere to the agreed-upon
levels of service. Synthetic monitoring provides insights into the availability and
performance of SaaS applications to hold vendors accountable.

09: Help IT Ops manage hybrid environment: Align your teams to drive coordinated
action. Employ comprehensive service dashboards across hybrid and multi-cloud
infrastructure with role-based access for a common, integrated view of all component
data. With access to the same data, everyone can respond quickly and strategically
based on a unified understanding.

10: Plan for the future: Look ahead. Will there be new services rolling out? Will you be
using more cloud services and mobile apps? What’s your rate of adding new end users?
By asking such questions in advance, you’ll be able to better align your network and IT
resources with your business’ evolving priorities. Establish a clear picture of what’s
currently happening on your network today so you can better plan for tomorrow.

SolarWinds Network Performance Monitor

SolarWinds® Network Performance Monitor is a multi-vendor network monitoring tool
designed to scale and grow with the requirements of your environment while remaining
cost-effective. It takes little time to set up, is easy to navigate, and can track network
issues, notifying the relevant teams through its advanced alerting feature. These
alerts can also be customized based on simple or complex nested trigger conditions, and
for deeper visibility, it provides network insights and intelligent maps. The PerfStack™ feature
gives users the ability to compare different data types side by side: simply drag and
drop the metrics onto the chart, and PerfStack will overlay them for natural correlation and
performance analysis.

SolarWinds NPM constantly monitors the performance and availability of network devices
and helps troubleshoot when problems arise. With its intelligent network alerting feature,
it apprises you of key performance metrics crossing your pre-defined thresholds, making
you the first to know when issues occur. It also helps resolve network connectivity issues
quickly, as it tracks and displays your current and historical performance metrics in
customizable charts and dashboards.

SolarWinds NPM also uses packet analysis to help you figure out if it’s the application or
the network causing poor user experience, helping accelerate resolution time. A few of
its key features include:

 Advanced network diagnosis and troubleshooting for on-premises, hybrid,
and cloud services with critical path analysis.
 Hop-by-hop scanning along the critical paths, helping you view performance,
traffic, and configuration details of devices and applications on-premises,
in the cloud, or across hybrid environments with NetPath™.
 Response time, availability, and performance of network devices monitored
and displayed, improving operational efficiency with out-of-the-box
dashboards, alerts, and reports.
 Automatic discovery of network devices, typically deploying in about an
hour.
With SolarWinds, you get fault, performance, and availability monitoring along with quick
detection and diagnosis. It has a customizable topology and dependency-aware
intelligent alerts system capable of responding to a wide range of condition checks,
correlated events, network topology, and device dependencies. It can automatically
discover and map devices, performance metrics, link utilization, and wireless coverage,
and automatically calculate exhaustion dates using customizable thresholds based on
peak and average usage. With its comprehensive monitoring of the F5 BIG-IP family of
products, you can visualize and gain insights into the performance of your F5 service
delivery environment, and its drag-and-discover feature lets you create interactive
performance charts with real-time network performance metrics alongside interactive
charts and graphs from your network devices.

Nagios

Nagios is an open-source tool with an interactive web interface to monitor performance.
It lets you track the current network status with your host status totals and service status
totals, and the GUI is color-coded to help you see elements quickly.
Nagios supports external plug-ins to create and use in the form of executable files, plus
shell scripts to track and collect metrics from every hardware and software in your
network. Nagios lets you monitor performance events through its color-coded alert system
that sends out notifications by email and SMS, making it easier to prioritize critical alerts.
You can also use APIs to integrate other services into it. You can also check thousands
of community plug-ins readily available on the Nagios Exchange if you need more options
and additional features.

Nagios has two different plans. One is an open-source, free version limited in monitoring
capabilities, and the other is a commercial version with added features. The commercial
version is accessible due to its external plug-ins and gives a centralized view of the entire
IT infrastructure, including detailed device status information. The tool has escalation
capabilities and proactively measures network downtime.

Zabbix

Zabbix is a free open-source tool with real-time monitoring of metrics collected from
servers, virtual machines, and network devices via multi-platform Zabbix agents, SNMP
and IPMI agents, agentless monitoring of user services, custom methods, calculation and
aggregation, and end-user web monitoring. It combines network, server, cloud,
application, and services monitoring in a single unified solution. Its auto-discovery feature
lets you locate network devices automatically and add them for monitoring. Zabbix can also
detect configuration changes automatically, so you can tell if a device has been updated
or not. Its preconfigured templates make it easier for you to configure the program for
monitoring. (For example, there are specific preconfigured templates for vendors such as
Cisco, Hewlett Packard Enterprise, Dell, and Intel to name a few.) You can access more
templates from the community site. Zabbix is a top Linux network monitoring tool, and its
auto-discovery and templates make the program straightforward to deploy. Zabbix is
known for the ability to customize its monitoring capabilities via any scripting or
programming language.

Spiceworks

Spiceworks is a popular free network monitoring tool. Its web-based GUI allows you to
access the dashboard and monitor the availability of your network infrastructure in real-
time. It also integrates with Spiceworks Inventory and Cloud Help Desk to help you track
your network inventory more effectively. For example, with Spiceworks Inventory, you can
track information on IP addresses, OS, MAC addresses, and more. Its alerting system
lets you know about performance fluctuations via email.
Spiceworks can prove to be an ideal choice for smaller enterprises looking for an
affordable network performance monitoring solution. It lets you start monitoring typically
in minutes and the connectivity dashboard is simple to set up. All you have to do is to
install the monitoring agent on any workstation/server, configure the application URLs,
and start monitoring. Spiceworks helps users gain real-time insights and spot sluggish
network connections or overwhelmed applications either hosted in the data center or the
cloud.

Icinga

Icinga is considered a Nagios backwards-compatible tool, meaning if you have a Nagios
script, you can port it over with relative ease because it has been developed on top of
MySQL and PostgreSQL. Icinga is an open-source network monitoring tool that monitors
the performance of your network, cloud services, and data center and can be configured
through the GUI or with its domain-specific language (DSL).
To get an overview of performance, you can log in to the GUI and use the dashboard to
see whether there are problems with performance or availability and color-code them
according to severity. Extensions, or Icinga Modules, allow you to add more functions to
the program. For example, the Icinga Module for vSphere lets you monitor virtual
machines and host systems. And from the Icinga Exchange, you can download a whole
array of community-created plugins for free.

Overall, Icinga is a scalable solution that provides control over how you manage your
environment. It’s available for Red Hat, Ubuntu, Fedora, Raspbian, and Windows and its
main features include an efficient monitoring engine across the entire infrastructure. It
also gives you the power to watch any host and application, and the monitoring engine
can monitor a complete data center and clouds.

It comes packed with an appealing web UI where custom views can be built by grouping
and filtering individual elements and combining them in a custom dashboard.

PRTG Network Monitor

PRTG leverages SNMP, packet sniffing, and WMI to monitor networks. Its intuitive
dashboard lets you see new alarms, critical devices, warning devices, and healthy
devices. Each of these is color-coded and used in the main screen display to denote the
health status of devices. It also includes SNMP monitoring, bandwidth monitoring,
scanning for devices by IP segment, custom dashboards, threshold-based alerts, reports,
and customizable network maps. Its other features include:
 Comprehensive sensor type selection
 Remote management via a web browser, Pocket PC, or Windows client
 Multiple location monitoring
 Alerts and notifications for outages via email, ICQ, pager/SMS, and more

Site 24×7

Site 24×7 is a cloud-based network monitoring tool that uses SNMP to monitor your
network. It’s easy to configure and automatically discovers SNMP devices by IP range.
Users can start monitoring right out of the gate without setting everything up from scratch.
You can track various metrics such as memory utilization, CPU usage, disk utilization,
active session count, and more.
The bulk of the monitoring experience takes place in the health console, where you
can get an overview of device performance. It also comes with network maps to see the
network topology, allowing more visibility to a problem. It takes care of SNMP and the
application and website monitoring, including user experience, making the transition
between onboarding, automatically discovering devices, and using device templates very
smooth.

Atera

Atera is a network monitoring tool designed for use by managed service providers (MSPs)
and hosted on the cloud. This tool has a range of monitors, including server and
application monitors. However, the client site will require agents installed on it. The mobile
application enables system administrators to check in on the status of the monitored
network from anywhere. Because Atera involves a high degree of automation, the ability
to log in to the network from any system is essential. The tool monitors the client’s network
and only requires human attention when a problem arises.
Atera also provides a mechanism for network discovery and is available in three versions:
Pro, Growth, and Power. It also has robust reporting, analytics, and alerting features.

ManageEngine OpManager

ManageEngine OpManager monitors network operations. Although this tool has server
monitoring capabilities, network monitoring is the core function of OpManager.
ManageEngine OpManager doesn’t oversee the traffic inflow but follows the status of
network equipment.
Its wireless network monitoring module comes in handy as it displays the signal strength
of wireless access points on the premises. OpManager is available in four editions: Free,
Standard, Professional, and Enterprise. A few of its features include:

 Uninterrupted monitoring of Wide Area Network (WAN) link availability
 Checking VoIP call quality across Wide Area Network (WAN) infrastructure
 Visualizing automatic L1/L2 network mapping

Zenoss Cloud

Zenoss Cloud is a performance monitoring solution for multi-cloud environments. With this
tool, you can monitor everything from network devices to applications, logs, and
event data in real time. Its intelligent dashboard provides you with
a complete perspective of performance concerns. It uses machine learning-based
anomaly detection to automatically detect performance deviations. Once an anomaly is
detected, root-cause analysis helps the user drill-down to the root cause of the problem.
A range of plug-in extensions called ZenPacks are available, encompassing everything
from devices to services. There are open-source plug-ins for Amazon Web Services,
Apache HTTP Server, Dell, Docker, HP, and Cisco, to name a few.
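The machine learning-based anomaly detection described above can be approximated, in a much simpler form, by a statistical z-score check over a metric's recent history. This is a sketch of the general idea, not Zenoss's actual method; the metric values are invented.

```python
import statistics

def is_anomaly(history, new_value, z_threshold=3.0):
    """Flag new_value if it deviates from the historical mean by more than
    z_threshold standard deviations (a simple stand-in for ML-based
    anomaly detection)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_value != mean  # any change from a flat baseline is anomalous
    return abs(new_value - mean) / stdev > z_threshold

# Recent latency readings in milliseconds (illustrative baseline).
latency_history = [20.0, 22.0, 19.5, 21.0, 20.5, 22.5, 21.5, 20.0]
print(is_anomaly(latency_history, 21.0))   # normal reading
print(is_anomaly(latency_history, 95.0))   # flagged as a deviation
```

Once such a deviation is flagged, root-cause analysis (as the text describes) takes over to drill down from the symptom to the underlying problem.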

SUMMING UP

Network performance monitoring solutions can help companies prevent a disaster before
it happens. They deliver visualizations of key performance metrics while automatically
generating performance reports, which ideally include both recent and historical data.
Beyond tracking the performance of your systems, these tools also help organizations
deal with security threats invading networks. With an accurate performance monitoring
tool, organizations can be alerted to events that might indicate the presence of malware
in a system, for example, abnormal data transfers or failing systems.

To truly understand network performance, organizations must observe every aspect of a
network, because any portion left unmonitored could be the cause of performance issues
that affect the entire environment. Network monitoring tools are a must-have to keep
networks organized and well maintained: they provide visibility to manage devices and
help ensure they’re available around the clock for the organization and its users.

4 Foundational Steps to Improving Service Delivery in Government

Improving service delivery in government comes with unique challenges. Governments
must be accountable to citizens in a way that the private sector is not. While a private
sector business can identify its target audience and deliver an experience catering to that
specific subset, governments must ensure they don’t prioritize any demographic over
another.
This inherent and necessary constraint can lead to frustration with government service
delivery, and it is a key reason why government customer satisfaction falls so far behind
the private sector. However, it’s not all doom and gloom. With digital channels and
technologies built for the public sector and its specific needs, there are many strategies
that can be adopted to improve service delivery in government. Here are four
foundational steps.

1. Start with efficiency

When improving service delivery in government, efficiency is the first building block. This
becomes clear when looking at the way government leaders speak to success in
programs and services. The delivery speed of a new program implementation, the
engagement level of citizen stakeholders, and value provided, are all now more widely
used as barometers of success than traditional economic markers.
“The reality of the public sector today is that it is assessed by the efficiency of its service
delivery. No longer is the effectiveness of the public sector measured by the revenue it
generates or the employment it provides…” – R Chandrashekhar, Additional Secretary,
eGovernance, Government of India
With a need to drive down costs and prove value, improving service delivery in
government today requires a focus on digital innovation. Live chat is becoming central to
this as a way to provide citizens’ channel of choice while maximizing resources: 75% of
consumers prefer live chat over any other communication channel, and live chat costs
about one-third as much as traditional phone support.
The efficiency of live chat also helps government agencies handle higher volumes of
inquiries while providing better and faster service to the public. WCB Manitoba introduced
Comm100 Live Chat and saw significant improvements in its agents’ efficiency and
quality of service.
Live chat also includes a variety of efficiency features that aren’t possible with traditional
phone support:
Canned Messages: Canned messages are pre-written messages that can be used to
quickly respond to common questions and greetings. This saves agents considerable
time as they don’t need to retype repetitive responses. Agents can also personalize these
messages to add a little personality to the conversation.

Chat Routing: This feature allows incoming messages to be automatically routed to the
right agent or department based on the answers the citizen gives in a pre-chat survey, or
using pre-set rules based on the citizen’s geographic location, previous conversation
history, or the page they initiated the chat from. This saves agents (and citizens) the time
spent on transferring the chat between agents, while improving the overall service
experience.
Co-browsing: The co-browsing feature allows agents to instantly view and interact with
a citizen’s web browser. If a visitor has difficulty navigating a web form or finding the right
resources, agents can intervene and more efficiently provide support. Best of all, privacy
is maintained through automatic masking, and since co-browsing is browser-based, no
downloads are required.

2. Keep chipping away at silos

Hierarchy is common in any organization, and nowhere more so than in government
organizations, where these structures are often tied to legislation or funding. This creates
major information and knowledge silos that damage service delivery. With that said,
there are still many ways that governments can break down these silos, with digital
omnichannel platforms being one.
Using a digital platform like Comm100 Omnichannel, every digital communication
channel is connected through one unified platform. This means that all citizen
touchpoints, conversation histories and essential data are all visible in one console. If a
citizen reaches out via email and then later follows up using live chat, an omnichannel
platform lets the agent access all previous communication. This means that citizens are
not repeating themselves, and agents can provide more personalized service.
With all this data then gathered into one platform, government agencies can also gain
insight into their citizens. They can understand what they are searching for, identify
common problems or questions, and learn how they feel about certain policies or issues.
Santa Fe County recognized the importance of this data to help build an accurate image
of their citizens when they implemented Comm100 Live Chat:
“Getting access to all the data that comes through Comm100’s platform is gold. We
present a lot of this data to our administration to show them what our constituents are
asking for, what they need, and how they feel.” – Tommy Garcia, Quality Control Program
Manager, Santa Fe County
From an organizational perspective, digital omnichannel platforms mean that government
service areas can be more strategic with digital channel support. Instead of juggling
multiple contracts with CRM providers and other online services, a digital omnichannel
platform can allow a large and diverse organization to unify client management and
knowledge resources. For example, an omnichannel platform allows live agents to easily
log live chat interactions to a CRM from inside the live chat window, enabling field agents
to stay current on client touch points with government as they are working on a file.

3. Set clear customer service expectations

When improving service delivery in government, it’s crucial that citizens’ customer service
expectations align with your capabilities. The most common example of this concept in
action is one that everyone has been on the receiving end of – long wait times for services.
By establishing service standards and being transparent with clients around expected
wait times, customers can make better informed decisions around how and when they
seek assistance.
Simply telling a customer that they are next in line could send the wrong message,
causing them to incorrectly believe they may be only seconds from receiving help. When
several minutes pass and that citizen is still waiting, they’ll undoubtedly feel worse about
their experience than if accurate expectations had been set at the start.
In government service delivery, setting customer service expectations needs to also
include organizational conversations around client outcomes. With all members of the
service area on board, agents can more effectively communicate expected outcomes,
improving the customer experience. A better understanding of where citizen and
organizational expectations meet allows service agents to more effectively communicate
with external and internal stakeholders alike.

4. Show commitment to security & privacy

Public trust in technology and government is down. Only 21% of consumers trust
established global brands to keep their data secure. Because of this, a focus on secure
technology is the key to moving forward as the public sector looks for means of improving
service delivery in government. In a post-pandemic world, expectations have never been
higher for digital engagement, and pressure is on to keep improving customer service in
government, while also protecting the security and privacy of its citizens.
Digital infrastructure has frequently become a target of attackers in recent years, making
procurement of secure services more important than ever in the public sector. With this
in mind, government organizations should be mindful to work only with technology
partners that have demonstrated a commitment to data security. Some of the most
significant certifications to demonstrate compliance with security and privacy standards
include:
SOC 2 Type II: Regulated via external audit, this certification looks for controls related to
service security, availability, process integrity, confidentiality, and privacy.
ISO 27001: Recognized worldwide, this standard regulates how information security
management systems are established, implemented, maintained, and continually
improved.
PCI DSS: This information security standard regulates how organizations handle
payment processing, ensuring that credit card information is kept secure.

HIPAA: The HIPAA certification regulates data protection in the U.S. health industry,
stipulating how personal information is maintained and accessed.
PIPEDA: A Canadian law regulating data privacy, PIPEDA covers how private
organizations collect, use, and disclose personal information while conducting business.

What is Network Performance?

Over the years, networks have grown larger and more complex. More users and devices
are connected than ever before. In addition, there are increased demands upon networks.
Industry reports show bandwidth usage has been growing at a rapid rate, and experts
expect bandwidth usage to increase at an even faster rate due to developments such as
SDN, public and private cloud, and gigabit Ethernet. Larger networks with greater traffic
flowing through them have also led to increased security concerns for IT departments.

While the factors that affect the performance of networks will change, one thing will not,
and that is the importance of network performance monitoring.

Most IT teams hear statements such as “The network is really slow,” “I can’t access that
on the network,” or “That application just won’t work on our network – don’t even bother”
multiple times a week.

These statements are all signs of a poor performing network.

While many employees accept poor performance as par for the course, network
performance optimization is becoming more and more critical to the success of an
organization.

Trends and Challenges

If employees or end users at your organization claim that the speed of your network is
affecting their ability to get their work done, your network is likely having a negative impact
on productivity.

This isn’t just an anecdote – there’s hard evidence to back up this claim. A
2016 study carried out in Australia and New Zealand showed that network outages cause
a productivity loss of over 50 hours per year. The three applications that had the biggest
impact on network performance were email, voice and video communication tools (UC
and VoIP), as well as office productivity applications. These are troubling results
especially considering how offices around the world rely on these applications to get basic
work tasks completed each day.

How to Optimize Network Performance

If you don’t invest in network monitoring and diagnostics solutions that are informed by
relevant metrics, you may miss out on valuable business opportunities. It can be hard to
conceptualize just how much a lost opportunity can cost because losses like this are not
always easily quantified. However, when enough of those losses start to add up, the
problem becomes very clear.

Suppose your customer-facing website is slow because of poor network performance.


Potential customers lose patience with the site and decide not to buy your products or
services. Or, perhaps you’re looking to move into a different market. If your network can’t
handle the demands, such a shift would place on it, your efforts will be in vain.

Network monitoring tools that measure key performance indicators can help you set
performance baselines, so that anytime network performance dips below its normal
range, your IT team receives an alert and can employ network troubleshooting strategies
to quickly resolve the problem.

Monitoring Tools Can Help

Don’t let poor management cost your business money. Implement the right network
performance monitoring tools to help you identify problems before they spin out of control.

Find the right solution vendor by researching the cost and benefit of several different
solutions. If this seems daunting, consider the most important assets your company
possesses and what is vital to the day-to-day functioning of your business.

Networks are a vital resource to the enterprise because they transmit crucial information
which enables us to communicate, collaborate, and accomplish tasks. For many
companies, however, poor network performance is the norm. Management isn’t always
treated as a priority and without the right tools, your organization could be risking it all.

Why are Businesses Concerned About the Performance of Their Networks?

Poor attention to network performance issues, or a lack of solutions, is a source of
frustration at many companies. Without a network that operates at a reasonable speed,
employees can’t send documents, use mission-critical applications, or even get general
tasks completed. However, despite how essential networks are, many companies simply
refuse to properly invest in the right solutions.

While it may seem mind-boggling that something so important would be swept under the
rug, there’s always a reason why companies aren’t willing to devote the time, energy, and
money to improve the performance and management of their networks.

Executives often claim it’s too expensive or that the rest of the c-suite isn’t interested in
fixing the problem. These excuses are harmful in the long run as there are serious and

costly consequences to neglecting factors that may be hindering the performance of your
network.

How Does Poor Network Performance Hurt Enterprises?

Although business leaders often think they’re saving money by not investing in the
proper solutions or network performance monitoring tools, the truth is they’re ultimately
hurting themselves. A poor performing network leads to low productivity, employee
disengagement, and decreased revenues. Employees don’t want to come to work if the
network hinders their ability to get work done, and if the situation persists for long enough,
good employees may end up finding a new place to work.

Network Performance is a Business Driver

When you invest in network management solutions and monitoring solutions, you’re
investing in your company’s future. Additionally, you’re committing to improving and
maintaining employee morale, which helps boost productivity, and in turn, leads to higher
profits.

It’s not too late to improve your network performance.

What is network optimization?

Network optimization comprises the technologies, tools, and techniques that help
maintain, improve, or maximize performance across all network domains. These
elements are used to monitor, manage, and optimize performance metrics to help
ensure the highest levels of service for users throughout the network.
Why is optimization important?

Network optimization is important because our interconnected real-time world is
completely dependent on the reliable, secure, available, 24/7 transfer of data, and every
year more and more demands are placed on networks. Every aspect of our digital lives
relies on how efficiently the network operates, which is why network optimization is
critical.
What are the core benefits of optimization?

There are four core benefits to optimization:


1. Optimization enables the free flow of data through the optimal usage of network
system resources.
2. Optimization tracks performance metrics, providing real-time reporting to help
network managers proactively manage the network.
3. Optimization provides analytics and predictive modelling so that network
managers can determine the impact any changes to the architecture will have on
the network before they are implemented.

4. All these benefits add up to the most important benefit: driving greater network
performance.
What is driving network optimization?

The top business objectives driving optimization involve technologies such as SD-WAN,
WiFi, big data, collaboration, multi-cloud, mobile, and edge computing. Business leaders
are demanding these technologies be implemented and are becoming more heavily
reliant on them for growth and operational efficiency. Huge volumes of data are being
generated as a result, consuming large amounts of network bandwidth and putting
greater strain on the network as a whole. At the same time, business leaders expect
these technologies to work. Without exception. Period.
Top trends driving optimization:
Legacy systems are being leveraged to become more agile
Though network technologies continue to evolve, they still need to play nicely with existing
systems. It’s simply not economically feasible to rip’n’replace. In our recent survey of
NetOps professionals, they said that transforming networks away from legacy
architectures to become more agile (and less costly) is their biggest priority for 2019.
Too much complexity is straining NetOps people and resources
One of the biggest challenges that Network Operations faces is a lack of time. Teams are
too busy troubleshooting issues across the network, usually made up of disparate
legacy architectures and multiple monitoring tools, to focus on larger strategic issues.
Network optimization offers the promise of a consolidated architecture, improved overall
network performance, and a single end-to-end view for Network Operations.
Distributed networks are causing greater security and risk
There are more devices, IoT devices, applications, cloud-based networks, virtual
networks, software-defined networks (SDNs) than ever before. And the more that are
introduced to the network, the more chances there are for security breaches. Within an
optimized network, NetOps have the ability to help minimize vulnerabilities to protect
sensitive data from infiltration and attack.
What are the common optimization metrics?

The top network optimization metrics quantify specific aspects of the network’s health,
such as latency, packet loss, jitter, congestion, and bandwidth, all of which are crucial
to network performance, end-user experience, and end-user productivity.
Top network optimization metrics are:
Network Availability
This is among the biggest concerns of any network manager responsible for business-
critical or mission-critical network environments. Network availability is the percentage of
time that the network is functioning over a specific period. All network resources are
monitored for availability, including network devices, interfaces, WANs, SD-WANs,

services, processes, applications, and websites, among others. The optimal network
availability metric is often expressed as “nine nines”: 99.9999999%, which translates into
31.56 milliseconds of downtime per year.
Network Utilization
This is a measure of the amount of traffic on the network, showing whether a network is
busy, stable, or idle. It is calculated as a ratio based on current traffic to the peak traffic
the network can handle and is specified as a percentage. Spikes in network usage can
affect the performance of the network infrastructure on every layer, and monitoring is
required to track usage increases. By measuring inbound and outbound patterns of
bandwidth usage, network managers can see at a glance how much and where the
network is being utilized, enabling them to make informed decisions about upgrades and
maintenance.
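The ratio described above can be sketched in a few lines; this helper is illustrative only (its name and parameters are assumptions, not taken from any tool).

```python
def utilization_pct(current_bps: float, capacity_bps: float) -> float:
    """Network utilization: current traffic as a percentage of link capacity."""
    if capacity_bps <= 0:
        raise ValueError("capacity must be positive")
    return 100.0 * current_bps / capacity_bps

# e.g. 400 Mbps of traffic on a 1 Gbps link is 40% utilization
usage = utilization_pct(400e6, 1e9)
```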
Network Latency
“Latency” is a synonym for “delay”. Network latency is the measurement of delays that
occur in data communication, either in a one-way or round trip of a packet of data. An
indicator of network speed, usually measured in milliseconds, latency has a big effect on
user experience (think VoIP calls or video streaming in particular). Networks that
experience small delays are low-latency networks; those with long delays are
high-latency networks.
Network Jitter
This occurs when a stream of data is not constant, resulting in some packets of data
taking longer than others to be delivered. Jitter is a sign of an overloaded router due to
network congestion, and usually results in poor online video or voice quality.
Network Service Delivery and Service Assurance
Service delivery monitoring is the technology that enables the visualization, detection,
alerting and reporting on the status of an end-to-end IT service. It is similar to service
assurance, a framework of technology and processes to ensure that IT services
offered over the enterprise network meet the agreed service-level agreement (SLA) for
optimal user experience. Service delivery monitoring and service assurance occur
through the optimization of the application performance across a hybrid network
infrastructure.
What are typical network optimization industry use cases?
Typical network optimization use cases span industries. Whatever the industry – real
estate, manufacturing, retail, professional services, oil & gas, or healthcare – the issues
that drive the decision to pursue a network optimization initiative are universal. The goals:
availability, flexibility, and scalability of enterprise networks – all with a view to provide
reliable, immediate, secure delivery of and access to enterprise data, applications and
services. The benefits: streamlined network management, increased security, more
effective data compliance and control, and overall cost savings in hardware, software and
Network Operations management.
Typical Optimization Initiatives
Typical optimization initiatives involve everything from individual workstations and
devices right up to the servers. The ideal way to do this is by leveraging existing
technologies, that is, optimizing existing systems without acquiring additional hardware
or software. Among the actions NetOps takes to optimize network performance are:
 traffic shaping
 redundant data elimination
 data compression
 streamlining of data protocols
 buffer tuning
 quality of service (QoS) implementation
 enhanced application delivery
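Of these actions, traffic shaping is commonly implemented with a token bucket, which caps the average rate while still permitting short bursts. The sketch below is illustrative only: the `rate`/`capacity` parameters and the timestamp-driven `allow` API are assumptions, not taken from any specific product.

```python
class TokenBucket:
    """Token-bucket traffic shaper: permits bursts up to `capacity` tokens
    while limiting the long-run average to `rate` tokens per second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity   # bucket starts full
        self.last = 0.0          # timestamp of the last refill

    def allow(self, now: float, packet_size: float) -> bool:
        # Refill tokens accumulated since the last check, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_size <= self.tokens:
            self.tokens -= packet_size
            return True
        return False  # packet must be queued or dropped
```

A shaper configured as `TokenBucket(rate=100, capacity=200)` would pass a 200-token burst immediately, then refuse further traffic until enough time has elapsed to refill the bucket.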

Why is optimization important to NetOps and network engineers?

Network technology is anything but static. It is constantly changing because the demands
upon it are changing as well. Call it what you will – business transformation, digital
transformation, network transformation – as the enterprise relies even more heavily on
the network, it’s put a lot more pressure on NetOps and network engineers.
This is why network optimization is central to the success of all network initiatives across
all network domains. It is the solution to a huge problem – that network management
teams who rely on legacy monitoring tools are only getting a partial picture of the network.
And without end-to-end visibility, they are not as effective as they could be.

Advanced network monitoring and management tools help mitigate these visibility issues
and help ensure successful network transformation implementations.
The pressure is on NetOps to drive business performance through the network
With enterprises becoming increasingly reliant on their networks for growth and
operational efficiency, the strategic IT decisions about the network have become, in
effect, strategic business decisions.

The rallying cry of “Do more with less!” in today’s business environment is very familiar to
NetOps. They’re always looking for ways to increase network flexibility and agility while
reducing costs. In our recent survey, 34% of NetOps professionals stated that improving
network agility is their highest priority for 2019. And they ranked lowering service provider
costs, Capex budgets, and Opex budgets as their next-most important goals for 2019.

Improve Network Agility


Network transformation is a reality that all NetOps teams must face
To achieve network agility and reduce costs, NetOps is moving from legacy architectures
to cloud-oriented networks. This way, they can offload CapEx and OpEx costs to
providers that can host, scale, and deploy applications and services only when needed.
As well, they’re using Software-Defined WAN (SD-WAN) to reduce overall Wide Area
Network (WAN) costs while speeding up provisioning and deployment for branch offices.
These kinds of virtualized environments are transformational, but they are still occurring
in environments with disparate legacy networks that each have their own toolsets for
monitoring and management. These siloed systems provide limited visibility and poor

interoperability outside of their own infrastructures, leaving NetOps with significant blind
spots.
NetOps teams are stretched in every direction
Time is not on NetOps’ side. In our survey, we found up to 42% of NetOps professionals
have difficulties troubleshooting issues across the whole network as a result of disparate
legacy architectures.
Network professionals face challenging network performance issues spanning several
network domains (38%) and are hampered by poor performance visibility across multiple
network fabrics (35%). With these blind spots, NetOps is unable to improve network
performance, making it difficult to deliver on network transformation successfully.
NetOps challenges – performance blind spots
 42% have difficulty troubleshooting performance issues across the entire network
 35% have poor visibility into performance across the entire network
 35% have poor end-to-end performance monitoring
 36% cannot proactively identify network performance issues
The network is the business – and NetOps needs to ensure it’s performing
Poor network performance means poor business performance. It’s a direct correlation
that NetOps professionals understand and are acting on. Our survey found that 45% are
looking for ways to improve application performance across the entire network; almost
40% seek to improve remote site network performance; and 37% seek to improve
WiFi/wireless performance.

Highest Priority Goal: Improve application performance across the entire network
Resolving network performance issues starts by gaining visibility and insight into what
and where issues are occurring. Our survey found that the need to eliminate network
performance blind spots through end-to-end network visibility is one of the key network
operations initiatives for 2019. In fact, 39% of NetOps pros are addressing this need by
consolidating legacy network monitoring tools. At the same time, 37% seek better network
performance management solutions, while another 34% seek to automate or upgrade
network configuration change management tools.

What network optimization software tools are available?


Flow visualization and analytics
These network performance monitoring and diagnostics tools provide complete network
administration, enabling NetOps to locate and troubleshoot problems, perform real-time
analytics of end-to-end network traffic, and control routers and switches.
Through visualization of traffic across the entire network down to a single flow – from the
system level to the device level to the interface level — NetOps has the ability to perform
proactive management and maintenance with full historical reporting for reference.

Packet visualization and analytics
These appliances extend network monitoring and visibility for troubleshooting of
application-level issues at remote sites, branch offices, WAN links, and data centers.
Through packet capture, NetOps can do real-time and post-event analytics, identifying
network performance issues using visualizations. As a result, they can keep business-
critical services like VoIP up and running, perform analyses of e-commerce transactions,
keep alerted to security attacks, and identify issues concerning latency, communications
quality and capacity.
SD-WAN visualization and reporting

These tools provide complete visibility of application performance across an SD-WAN


topology, enabling NetOps to monitor devices and service-level performance across
multiple domains. They simplify policy verification, correlate multiple data sets from the
edge routers, isolate specific applications, and enable NetOps to drill down into specific
parameters and policies to determine SLA performance. With virtual imagery of tunnels,
VPNs, VRFs, interfaces, and more, NetOps can perform deep analysis to identify errors,
misconfigurations and mistakes, all in one view.

What Is Network Optimization?

Network optimization is the iterative process of improving the performance, reliability, and
resilience of your IT network. Different network optimization techniques, tools, and
architectures can be leveraged to optimize performance within the bounds determined by
your organization’s IT resources.
As a network administrator, you understand that your goal isn’t merely to avoid
“downtime,” but to deliver an efficient experience for users. Given your physical
infrastructure, and the budget your business has allotted for your network operations, you
need to find the ideal balance of performance and expenditure.

Network Optimization Metrics & KPIs

Although network traffic optimization is often viewed as a complex and daunting task, the
main goal is actually very simple: to improve network performance. While there are many
ways to achieve this, the most important factor is which metrics you track.

Packet Loss

Packet loss is one of the most important metrics for network optimization. Loss can
degrade network performance in many ways, resulting in slower response times,
reduced effective bandwidth, and increased latency.
There are several factors that can cause packet loss, including hardware failure, software
issues, and congestion. To optimize your network, you need to identify the source of the
packet loss and take steps to mitigate it. Common resolutions include hardware refreshes
or upgrades, expanding available bandwidth, and QoS prioritization. Fortunately for

network admins like yourself, network fault management software platforms can help you
pinpoint precise hardware failures that may be causing packet loss within your network.
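Packet loss is usually reported as the fraction of probes that never arrive. A minimal illustrative calculation (names are assumptions) looks like:

```python
def packet_loss_pct(sent: int, received: int) -> float:
    """Packet loss as a percentage of packets sent."""
    if sent <= 0:
        raise ValueError("no packets sent")
    return 100.0 * (sent - received) / sent

# e.g. 1000 probes sent and 990 echoed back is 1.0% loss
loss = packet_loss_pct(1000, 990)
```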

Latency
Latency is another important metric for network optimization. It measures the time it takes
for a packet to travel from its source to its destination. Latency can be affected by many
factors, including network, storage, or server hardware failures, software faults, and sub-
optimal networking configuration.

High latency can lead to issues including dropped connections, choppy audio and video,
and delayed response times. To reduce latency, you need to identify the source of the
issue and take steps to mitigate it. If hardware failures are behind your latency
issues, network topology mapping software can help you pinpoint the device-level
geographic location of your culprit.
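Because a few slow packets can dominate user experience, latency is often summarized with percentiles (p95, p99) rather than a simple average. The nearest-rank sketch below is illustrative, not tied to any monitoring product.

```python
import math

def latency_percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (e.g. in ms)."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    # nearest-rank: ceil(pct/100 * n), converted to a 0-based index
    rank = math.ceil(pct / 100.0 * len(ordered))
    return ordered[max(rank - 1, 0)]

# For samples [10, 12, 11, 95, 13] the median is 12 ms,
# but the p95 latency is 95 ms: one outlier dominates the tail.
median = latency_percentile([10, 12, 11, 95, 13], 50)
p95 = latency_percentile([10, 12, 11, 95, 13], 95)
```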

Bandwidth
Bandwidth is another important metric for network traffic optimization. It measures the
amount of data that can be transferred between two points in a period of time. Bandwidth
can be affected by hardware, software, and networking configuration.
High bandwidth utilization can lead to issues like increased latency, dropped connections,
and poor audio and video quality over your network. To reduce bandwidth utilization, you need to
identify the source of the issue and take steps to mitigate it. Common causes of enterprise
bandwidth strain include malware, sporadic application updates, and even content
streaming/social media applications that are resource-heavy but not business-critical.
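As a quick sketch of the arithmetic behind utilization monitoring: two reads of an interface's byte counter, some seconds apart, convert directly into throughput and percent utilization. The figures below are invented.

```python
# Sketch: converting transferred bytes over an interval into throughput,
# the way a monitoring poller derives utilization from two counter reads.

def throughput_mbps(bytes_transferred: int, seconds: float) -> float:
    """Bits per second expressed in megabits (1 Mb = 1e6 bits)."""
    return (bytes_transferred * 8) / seconds / 1e6

def utilization_pct(mbps: float, link_capacity_mbps: float) -> float:
    """Throughput as a percentage of link capacity."""
    return 100.0 * mbps / link_capacity_mbps

rate = throughput_mbps(125_000_000, 10.0)  # 125 MB in 10 s -> 100 Mbps
usage = utilization_pct(rate, 1_000)       # on a 1 Gbps link -> 10%
```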

Availability
Availability is the final metric for network traffic optimization. It measures the uptime of
your network, or the amount of time that it is available for use. Availability can be affected
by several factors, including software, hardware, and the configuration of your network.
It’s important for your workforce and any external end users to have uninterrupted access
to the resources and applications they need. To realize high availability, you must identify
the source of your outage and put redundancies or prioritizations in place to maintain
uptime.
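The availability figure itself is simple arithmetic: uptime divided by total elapsed time. A sketch, with invented durations:

```python
# Sketch: availability as the fraction of time a service was up.
# For reference, "four nines" (99.99%) allows roughly 52.6 minutes
# of downtime per year.

def availability_pct(uptime_s: float, downtime_s: float) -> float:
    """Percentage of the observation window the service was available."""
    total = uptime_s + downtime_s
    return 100.0 * uptime_s / total if total else 0.0

year_s = 365 * 24 * 3600
avail = availability_pct(year_s - 52.6 * 60, 52.6 * 60)  # ~99.99%
```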

5 Network Optimization Techniques and Best Practices

Perhaps the most important benefit of optimizing your network is avoiding costly
downtime, which is estimated to burn $5,600 per minute.
Different techniques can be used to optimize a network. Some of the most popular
include:

1. Load Balancing
This technique helps distribute traffic evenly across a network, which can help prevent
congestion and ensure optimal performance. It ensures that no one server is
overutilized during high-traffic periods; load is instead smoothed across your broader
server pool.
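A toy round-robin scheduler illustrates the idea. The server names are placeholders, and production balancers additionally do health checks and weighting.

```python
# Minimal round-robin sketch: hand each new request to the next server in turn.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, servers):
        self._pool = cycle(servers)  # endlessly repeats the server list

    def next_server(self):
        """Pick the next server for an incoming request."""
        return next(self._pool)

lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])
assignments = [lb.next_server() for _ in range(6)]
```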

2. QoS Prioritization
QoS prioritization allows you to prioritize certain types of packets sent across your
network. This includes prioritizing mission-critical traffic, such as video conferencing and
VoIP packets, over server backups that don’t need to happen in real time. This can help ensure
that critical data is always transferred smoothly and efficiently.
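The idea can be sketched as a strict-priority queue. Real routers implement queuing disciplines in hardware; the traffic classes and priority values below are invented for illustration.

```python
# Sketch of strict-priority queuing: lower number = higher priority.
import heapq

PRIORITY = {"voip": 0, "video": 1, "bulk-backup": 9}  # assumed traffic classes

queue = []
for seq, (kind, payload) in enumerate([
    ("bulk-backup", "chunk-1"),   # arrives first, but lowest priority
    ("voip", "frame-1"),
    ("video", "frame-1"),
]):
    # seq breaks ties so equal-priority packets keep arrival order
    heapq.heappush(queue, (PRIORITY[kind], seq, kind, payload))

send_order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
```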

3. Payload Compression
Payload compression reduces the size of data packets, which can help improve
bandwidth and avoid congestion.
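For example, with Python's standard zlib; the payload is an invented, highly repetitive sample, so the ratio achieved here is optimistic compared with typical traffic.

```python
# Sketch: payload compression with zlib. Ratios depend heavily on how
# repetitive the payload is.
import zlib

payload = b"GET /api/v1/status HTTP/1.1\r\n" * 100  # repetitive sample data
compressed = zlib.compress(payload, level=6)
ratio = len(compressed) / len(payload)              # fraction of original size
restored = zlib.decompress(compressed)              # lossless round-trip
```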

4. Leveraging an SD-WAN
Software-defined wide area networks (SD-WANs) can help improve network
performance by dynamically routing traffic. SD-WAN leverages a virtualized, centralized
control function to direct your network traffic across a wide area network. This eliminates
the need to backhaul data to a central source if your organization has multiple campuses.

5. Improved Hardware
In some cases, simply upgrading your network hardware can be enough to improve
performance. If an old switch or router is causing frequent bottlenecks within your
network, an upgrade could be all it takes to restore order.

Two Popular Network Optimization Solutions


Why haven’t you already optimized your network? The reason is simple: in today’s
economic climate, your IT team is likely suffering from having the wrong tools, or from
not having the right people (or too few of them). Resources are not limitless, and
tools don’t always fit your business needs.

If your IT team is exhausted, when are you finally justified in introducing network
monitoring software into your IT operations?

Luckily, there are several types of solutions that can help your team with network
optimization. This includes network optimization tools that empower you to in-house your
optimization efforts, and network optimization managed services that do the heavy lifting
for you.

Network Performance Monitoring Software

Network performance monitoring software is a type of software that helps you track the
performance of your network. This type of software enables you to optimize your
enterprise network across devices supplied by a host of different vendors.

Enterprise network monitoring software can automate network device discovery and
provide intuitive workflows that make it easy to identify when faults arise. Responsive
dashboards let you quickly assess your network health or drill down to the device level to
quickly find and fix network problems.
Entuity Software™ supports network infrastructure monitoring for thousands of devices
out of the box across hundreds of vendors. And Entuity has been consistently recognized
as a leader in the Network Monitoring & Management category by G2.

Network Management as a Service

Network management as a service (NMaaS) provides managed engineer assistance for
optimizing your network. Network management services can help you with a variety of
tasks, including monitoring performance, identifying issues, and implementing solutions.
NMaaS providers can help streamline your network management processes by taking
care of hybrid environments, supporting new networking technologies, and ensuring
outstanding end-user experiences. Some of the best network infrastructure management
providers, like ParkView Managed Services™, are backed by an Enterprise Operations
Center (EOC) that ensures a smooth deployment with a tailored onboarding plan that
includes custom set-ups for network device management, custom dashboards and
reports that cater to your organizational goals.

Choosing the Right Way to Optimize Network Performance


There are many different options available for network optimization. The right solution for
you will depend on your specific needs.
If you’re looking for a solution that will enable your networking team to monitor your own
network, then network performance monitoring software may be a good option. If you’re
interested in having an expert team optimize your network so you can focus on more
strategic initiatives, then network management as a service may be a better choice.
As the leading Global Data Center and Networking Optimization Firm, Park Place
Technologies can help your team with nearly any networking hurdle you encounter.
Contact us today to learn how Entuity™ Software and ParkView Managed Services™
can support your team!

Quality of experience (QoE) in telecommunications services

Many would agree that digital communication is one of the driving forces for the growth
of businesses and societies, and it is quite normal that everyone expects the best quality
in the telecommunication services they use in their everyday digital interactions. In
today’s harsh market reality, the customer perspective tells us that if a service fails to
provide the highest quality, it will be abandoned sooner or later. At the other end of the
line, service providers may not be fully aware of either their customers’ needs or the
tools that can be used to prevent their dissatisfaction. To close this gap, the notion of
Quality of Experience has been introduced as a measure for stepping into the shoes of
the users and, ultimately, offering better services and solutions.

What’s in Quality of Experience (QoE)?


But the abundance of overlapping concepts and their puzzling abbreviations
surrounding the concept of quality measures doesn’t seem to facilitate the task of
understanding the importance of Quality of Experience in telecommunication services.
You might have already come across such terms as UX, QoS, QoX or QoE; while all of
these are useful for determining both qualitative and quantitative parameters of
interaction between the user and a system or service, there is a recent consensus
among the experts that key importance should be assigned to the idea of Quality of
Experience (abbreviated either as QoE or QoX). Let’s follow this concept to see the
source of its popularity and the rewards behind it.

Being by far the most comprehensive method for researching customer satisfaction,
QoE has been widely adopted across many disparate consumer-related industry
environments. But since its inception, the Quality of Experience paradigm has been
applied mainly in the field of telecommunications, and this is where its research
instruments are at their sharpest and its measures are most accurate. Being in a way
similar to the notion of Quality of Service (QoS) that intends to measure, improve and
guarantee software and hardware characteristics, QoE in telecommunications is far
more comprehensive and wider in scope.
Quality of Experience (QoE) definition
Boiling it down to a simple, matter-of-fact definition, it can be said that Quality of
Experience is the measure of the overall level of satisfaction of a user with a service
from the user’s perspective. But next comes the hard part. What true QoE really intends
to measure is not only the objective parameters of system performance (like QoS does).
It aims at embracing the subjective experiences of the service user, with all their
complexities and human-dependent variables like the physical, temporal and even
social and economic factors.

With that said, is it at all possible to measure accurately someone’s subjective
feelings about a thing they are using, at the time of using it, and to draw useful
conclusions from the gathered data? The straightforward answer is yes (imagine it
were a ‘no’; the article would have to end here!), but there are ‘buts’ to it. The task of
measuring QoE in telecommunication services is highly complex and cannot be realized
successfully without the right tools in place to collect the right data for analysis.
Moreover, you have to be aware of which service parameters are essential for the user
to enjoy the service. Therefore, the essence of determining QoE in a particular case
depends not so much on the volume or scope of what is to be measured and
transformed into metrics. It is rather about knowing which of many service parameters
are essential factors in user satisfaction and about measuring them from a perspective
as close to the user’s perception as possible.
How to ensure Quality of Experience (QoE)
Over the years, the awareness of how QoE helps to improve user satisfaction and,
ultimately, user loyalty, has been growing among the telecommunications operators. As
a result, there have emerged solutions especially dedicated to monitoring Quality of
Experience parameters in telecommunication services. But since Quality of Experience
aims to encompass every factor that contributes to the user’s perception of the service
quality, including human, system and context-related aspects, it is extremely
challenging to produce reliable and unbiased conclusions on their basis.
Therefore, out of all the factors influencing the results of a QoE check, the network and
performance-related parameters probably are the most relevant and objective, as long
as they are measured. Among the globally-recognized leaders in this field is
AVSystem's UMP multiprotocol platform used by telecommunications operators to
manage, control and monitor the performance of their network infrastructure.

Offering a comprehensive QoE solution, UMP helps to measure the end-to-end
performance of networks and devices using real-time diagnostic tools and flexible
monitoring mechanisms. These can be suited to practically any QoE measurement
scenario, allowing the operators to react immediately to any arising issues thanks to
presenting the performance parameters from the user’s perspective.
But UMP is not only about QoE measurements and metrics. Another great feature in
the context of QoE is the smart monitoring and managing of Wi-Fi services. Thanks to
the automated Smart Workflows engine, telco operators can save their time in manually
fixing repetitive network issues of their customers, greatly influencing their experience
with the system as well as with the operator’s customer care services.
Quality of Experience (QoE) matters
In telecommunication services, subscriber loyalty depends on the end-user QoE
level. Quality of Experience goes deeper than Quality of Service in ensuring that
the objective service performance parameters, as well as subjective user impressions,
are kept on a high-level. Thus, service providers need to know their users’ QoE in real-
time to be able to improve their services on-the-fly. With UMP, user experience with
telco services can be ensured end-to-end.

Overview

Forecasting is a necessary part of planning. The future cannot be predicted with
certainty, but statistical data analysis helps prepare for what lies ahead. Since
telecommunications is a supporting department in many organizations, its forecasting
generally depends on the organization’s overall planning and forecasting. It is important
for telecommunications managers to understand the organization’s forecast or, if
possible, to be part of the organization’s forecasting team. Understanding the
organization’s plan and the direction it wants to move in helps telecommunications
managers plan efficiently. While some equipment is easy to get on short notice, many
components and systems take time (e.g., a new PBX, trunks, increased voicemail ports).
It is better to predict these increases and arrange to get the equipment quickly when
needed.

Reasons to Forecast:

Telecommunications managers spend most of their time solving problems or fighting
fires. Managers can plan efficiently to project future demand and eliminate many of
these daily fire drills by:
1. Making sure that the number of circuits and trunks meets demand; controlling the
cost of connect vs. reconnect
2. Arranging the fastest way to get extra hardware capacity when needed
3. Anticipating staffing demands, such as the number of call center agents and PBX
administrators that will be needed
4. Predicting physical space, which is often the hardest commodity to obtain
5. Calculating telecom budgets

Steps to Complete the Forecast

Forecasting is very similar to any other business project in an organization. The three
fundamental steps to completing a forecast for telecommunications are determining a
pattern, finding a source of data, and using the best method to manipulate the data for
accurate outcomes.

Determining the Pattern

Collecting statistical data about your organization’s telecommunications services is the
best way to predict the organization’s future needs. You can often discern patterns in
the data from which you can predict needs in the time to come. The four important
patterns that underlie forecasts are:
1. Trends (e.g. an organization closed on the weekends might see an influx of calls on
Mondays)
2. Seasonality (e.g. weather, Christmas and the New Year)
3. Cyclicality (e.g. rise or fall of the economy over several years)
4. Randomness (e.g. riot and flood)

Finding a Source of Data

There are many ways within telecommunications systems and services to find statistical
reports and data for forecasting. Most equipment (such as the PBX) is computerized and
can easily produce statistics. Sources of telecommunications data include:

1. ACD (Automatic Call Distributor): management information systems, which produce
reports that range from quarter hours to weeks
2. Switching System (PBX): collects usage information on the number of hours and call
seconds for circuit groups
3. Call Accounting System (part of the PBX):
a) Data can be sorted and combined in a variety of ways
b) Collects data a month at a time
c) Data doesn’t show uncompleted calls

Using the Most Feasible Method

There are several methods of forecasting using the data collected from one of the sources
described above. The method used depends on the type of information that you have and
the type of information that you are searching for. It also depends on the degree of
accuracy that is required. Much of the data can be collected in a spreadsheet, such as
Excel, and analysed by one of the following forecasting methods:

1. Time Series: Fits a trend to plotted data to show upward or downward movement. The
disadvantage is that it predicts the future based on history alone.

2. Moving Averages: Smooths out variations by averaging the data before and after a
given point. The disadvantage of this method is that it gives the same weight to all
observations; in an effective analysis, recent observations should carry more weight
than older ones.

3. Exponential Smoothing: Much like moving averages, but uses an exponentially
weighted average so that recent observations count more than older ones.

4. Regression Analysis:

a) Uses one known variable to predict another
b) Fact or forecast A is used to predict B
c) The correlation coefficient indicates how closely the data are related (0.0 to 1.0)
d) Another part of the organization must prepare forecast A so that the
telecommunications manager can use it to forecast telecommunications needs

5. Judgmental Forecasting: Changing the forecast as you go along to keep from getting
too far off track

6. The Delphi Method:

This forecasting method is better than a simple guess, but it is risky. Data is gathered
through the following steps:
1. Asking the opinion of the various department leaders about the direction the
organization is going
2. Weighting their answers according to knowledge, optimism or pessimism
3. Averaging those weighted answers
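Two of the methods listed above can be sketched in a few lines: a 3-month moving average and simple exponential smoothing with α = 0.5, applied to invented monthly call volumes.

```python
# Sketch: moving-average and exponential-smoothing forecasts on made-up data.

def moving_average(series, window=3):
    """Forecast the next value as the mean of the last `window` points."""
    return sum(series[-window:]) / window

def exponential_smoothing(series, alpha=0.5):
    """Recent observations get weight alpha; older ones decay geometrically."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

calls = [1200, 1260, 1180, 1300, 1340]  # assumed monthly call volumes
ma_forecast = moving_average(calls)      # mean of the last 3 months
es_forecast = exponential_smoothing(calls)
```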

Social Media Analytics
An overview of social media analytics

Practitioners and analysts alike know social media by its many websites and channels:
Facebook, YouTube, Instagram, Twitter, LinkedIn, Reddit and many others.

Social media analytics is the ability to gather and find meaning in data gathered from
social channels to support business decisions — and measure the performance of
actions based on those decisions through social media.

Social media analytics is broader than metrics such as likes, follows, retweets,
previews, clicks, and impressions gathered from individual channels. It also differs from
reporting offered by services that support marketing campaigns such as LinkedIn or
Google Analytics.

Social media analytics uses specifically designed software platforms that work similarly
to web search tools. Data about keywords or topics is retrieved through search queries
or web ‘crawlers’ that span channels. Fragments of text are returned, loaded into a
database, categorized and analysed to derive meaningful insights.

Social media analytics includes the concept of social listening. Listening is monitoring
social channels for problems and opportunities. Social media analytics tools typically
incorporate listening into more comprehensive reporting that involves listening and
performance.

Why is social media analytics important?

IBM points out that with the prevalence of social media: “News of a great product can
spread like wildfire. And news about a bad product — or a bad experience with a
customer service rep — can spread just as quickly. Consumers are now holding
organizations to account for their brand promises and sharing their experiences with
friends, co-workers and the public at large.”

Social media analytics helps companies address these experiences and use them to:

 Spot trends related to offerings and brands


 Understand conversations — what is being said and how it is being received
 Derive customer sentiment towards products and services
 Gauge response to social media and other communications
 Identify high-value features for a product or service
 Uncover what competitors are saying and its effectiveness
 Map how third-party partners and channels may affect performance

These insights can be used to not only make tactical adjustments, like addressing an
angry tweet, they can help drive strategic decisions. In fact, IBM finds social media
analytics is now “being brought into the core discussions about how businesses develop
their strategies.”

These strategies affect a range of business activity:

 Product development - Analyzing an aggregate of Facebook posts, tweets and
Amazon product reviews can deliver a clearer picture of customer pain points,
shifting needs and desired features. Trends can be identified and tracked to
shape the management of existing product lines as well as guide new product
development.
 Customer experience - An IBM study discovered “organizations are evolving
from product-led to experience-led businesses.” Behavioral analysis can be
applied across social channels to capitalize on micro-moments to delight
customers and increase loyalty and lifetime value.
 Branding - Social media may be the world’s largest focus group. Natural
language processing and sentiment analysis can continually monitor positive or
negative expectations to maintain brand health, refine positioning and develop
new brand attributes.
 Competitive Analysis - Understanding what competitors are doing and how
customers are responding is always critical. For example, a competitor may
indicate that they are foregoing a niche market, creating an opportunity. Or a
spike in positive mentions for a new product can alert organizations to market
disruptors.
 Operational efficiency – Deep analysis of social media can help organizations
improve how they gauge demand. Retailers and others can use that information
to manage inventory and suppliers, reduce costs and optimize resources.
Key capabilities of effective social media analytics

The first step for effective social media analytics is developing a goal. Goals can range
from increasing revenue to pinpointing service issues. From there, topics or keywords
can be selected and parameters such as date range can be set. Sources also need to
be specified — responses to YouTube videos, Facebook conversations, Twitter
arguments, Amazon product reviews, comments from news sites. It is important to
select sources pertinent to a given product, service or brand.

Typically, a data set will be established to support the goals, topics, parameters and
sources. Data is retrieved, analyzed and reported through visualizations that make it
easier to understand and manipulate.

These steps are typical of a general social media analytics approach that can be made
more effective by capabilities found in social media analytics platforms.

 Natural language processing and machine learning technologies identify
entities and relationships in unstructured data — information not pre-formatted to
work with data analytics. Virtually all social media content is unstructured. These
technologies are critical to deriving meaningful insights.
 Segmentation is a fundamental need in social media analytics. It categorizes
social media participants by geography, age, gender, marital status, parental
status and other demographics. It can help identify influencers in those
categories. Messages, initiatives and responses can be better tuned and
targeted by understanding who is interacting on key topics.
 Behavior analysis is used to understand the concerns of social media
participants by assigning behavioral types such as user, recommender,
prospective user and detractor. Understanding these roles helps develop
targeted messages and responses to meet, change or deflect their perceptions.
 Sentiment analysis measures the tone and intent of social media comments. It
typically involves natural language processing technologies to help understand
entities and relationships to reveal positive, negative, neutral or ambivalent
attributes.
 Share of voice analyzes prevalence and intensity in conversations regarding
brand, products, services, reputation and more. It helps determine key issues
and important topics. It also helps classify discussions as positive, negative,
neutral or ambivalent.
 Clustering analysis can uncover hidden conversations and unexpected insights.
It makes associations between keywords or phrases that appear together
frequently and derives new topics, issues and opportunities. The people that
make baking soda, for example, discovered new uses and opportunities using
clustering analysis.
 Dashboards and visualization charts, graphs, tables and other presentation
tools summarize and share social media analytics findings — a critical capability
for communicating and acting on what has been learned. They also enable users
to grasp meaning and insights more quickly and look deeper into specific findings
without advanced technical skills.
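As a toy illustration of the clustering idea above, the sketch below counts keyword co-occurrence across posts; clustering tools build topic groups from exactly this kind of association signal. The posts are invented examples.

```python
# Toy sketch of keyword co-occurrence, the raw material that clustering
# analysis works from.
from collections import Counter
from itertools import combinations

posts = [
    "baking soda removes fridge odor",
    "baking soda cleans grout",
    "fridge odor gone with baking soda",
]

pairs = Counter()
for post in posts:
    words = sorted(set(post.split()))   # dedupe and order words per post
    pairs.update(combinations(words, 2))  # count every co-occurring pair

top_pair, count = pairs.most_common(1)[0]  # strongest association
```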

Social media offers a huge pool of consumers ripe for brand communication targeted
toward these interests – but consumers resent interruptions. And this is particularly true
when someone is trying to sell them something! So, this is where social media analytics
comes into play. We’ll break down just what this crucial business tool is; what it isn’t;
why you need to use it; and, how!
First, social media analytics isn’t about brands. It’s about people sharing their lives with
others they know based on common interests. Social is a wonderful place for
consumers and brands to connect, as long as they remember one thing: social media
may provide your brand’s first and last impression, so both need to be good ones. Many
businesses adopt a brand-centric focus when starting out on their data analytics
journey, and that can be dangerous. Let’s see why!

What is Social Media Analytics?

Techopedia defines social media analytics as:

“Social media analytics (SMA) refers to the approach of collecting data from social
media sites and blogs and evaluating that data to make business decisions. This
process goes beyond the usual monitoring or a basic analysis of retweets or ‘likes’ to
develop an in-depth idea of the social consumer.”
This is a pretty apt description, though we’d like to clarify that “social media sites”
encompasses not just Facebook, Twitter, and the like, but forums and review sites as
well as blogs and news outlets. Really, it’s anywhere that consumers can share their
beliefs, opinions and feelings online.
Just as buzzwords lose meaning over time, many brands lose sight of the value of
social media analytics because at first glance social data comes with a lot of noise.
Nobody has time to sort through results that include spam, bots, and trolls to get to the
good stuff.
Additionally, brands often make the mistake of running a social media analysis on a
topic once and then calling it good. The online world is always in a state of flux, so there
is an ongoing relationship with the data in social media analytics to account for
fluctuations inherent in the medium.
The ability to cut through the online noise in pursuit of actionable market, competitive
and consumer intelligence, coupled with consistent monitoring to track conversational
fluctuations over time is the mark of effective social media analytics.
Quite simply, when you have state of the art tools, social media analytics becomes a
treasure trove of consumer insights you can’t find anywhere else. Without them
however, social media presents a guessing game in an ever-changing slog of
information without cohesive insight.
Building on this, we’d extend the definition above to say social media analytics is
a collection of data unearthed via multiple techniques from multiple sources versus a
single tool in and of itself.
To clarify, let’s run through some terms often confused with social media analytics.
So-Called Synonyms That Aren’t
If social media analytics is a destination, what tools contribute to the journey? And what
are their distinctions?

Social Media Intelligence is the closest term-cousin to social media analytics. Social
intelligence represents the stack of technology solutions and methods used to monitor
social media, including social conversations and emerging trends. This intelligence is
then analysed and used to create meaningful content and make business decisions
across many disciplines.

Social Media Listening is one of the terms most often confused with social media
analytics. But social listening applies to one specific aspect of social media analytics:
Learning about your audience.

The goal here is to uncover what they love, hate, and love to hate – as opposed to any
assumptions you may have. It’s about getting to know them as people, not just
prospects.
For instance, if you want to know what people in Boston have to say about pizza, you
can find out using a tool like NetBase Pro. From there, you can look for additional
common ground to create audience segments to make your interactions more personal.

Social Media Monitoring is the second term most often confused for social media
analytics. It’s also thought to be synonymous with social listening, but the two are very
different.
Social monitoring focuses on following social audiences to be alerted to spikes in
activity that present either an opportunity you wouldn’t want to miss, or a potential
disaster you want to avoid. It’s about seeing risky posts in time to respond and avoid
a viral crisis.

Social Competitive Analysis is the process of investigating competitors of your brand
and their audience. Because social media is such a transparent medium, social media
analytics tools can be applied to brands beyond your own.
analytics tools can be applied to brands beyond your own.
This gives you the advantage of seeing how they serve their customers, what
consumers love or hate about them and what new products or services they’re offering.

This information allows you to see what your shared audience gets excited about, so
you can capitalize on fresh ideas you might never have had yourself. Additionally, it can
save the day when things go wrong, or save your own budget by learning from
competitors’ mistakes.
And as consumer attitudes are never static, brands can also monitor how other brands
are handling the social climate to adjust if things are hitting close to home. Earlier this
summer Quaker Oats retired its longstanding Aunt Jemima brand out of concern for
racial impacts to consumers. Others are paying attention.

Image Analytics is a new feature made possible by the evolution of social media
analytics technology. Image analytics levels up text analysis by identifying scenes, facial
expressions, geographical locations, brand logos and more in social images. This is
especially useful when a brand is pictured, but not explicitly mentioned in the text.
As social users become increasingly visual, the inability to perform image analytics
becomes a deal-breaker when researching social media analytics tools. Basically, if
your social media analytics tool isn’t picking up images where your brand is pictured
but not explicitly mentioned, then you’re missing out on a lot of the conversation.

And to really make sure you’re not missing a thing, it needs to capture not only full
logos, but altered, partial or reversed brand mentions as well – like a reversed and
cut-off PetSmart logo.

Social Media Sentiment Analysis


Social sentiment is the tie-in that applies to all facets of your social media analytics.
Without it, you don’t have any way of gauging why you’ve suddenly got 500K more
“likes” or shares than usual. What if an uptick in activity isn’t a good thing? The only way
to know is through sentiment analysis.
This layer of social media analytics uses Natural Language Processing (NLP) to
understand whether social conversations are positive or negative, and to measure the
strength of those emotions. This helps you triage responses so you don’t waste energy
on posts that don’t matter, while ignoring posts that do.
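A minimal lexicon-based scorer shows the shape of the computation. The lexicon and weights below are invented; production tools use trained NLP models rather than hand-written word lists.

```python
# Toy lexicon-based sentiment scorer: sum per-word scores, then label.
LEXICON = {"love": 2, "great": 1, "ok": 0, "slow": -1, "hate": -2}  # assumed

def sentiment_score(text: str) -> int:
    """Sum word scores; > 0 is positive, < 0 negative, 0 neutral."""
    return sum(LEXICON.get(w.strip(".,!?").lower(), 0) for w in text.split())

def label(score: int) -> str:
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

post = "I love this phone but the network is slow!"
score = sentiment_score(post)  # 2 (love) - 1 (slow) = 1 -> positive
```

The magnitude of the score is a crude stand-in for emotional strength: a post scoring +2 reads as more enthusiastic than one scoring +1.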

[Figure: Emotions when talking about dogs on social – the “love” is strong]
Customer Experience Analytics combines social listening insights with Voice of the
Customer (VoC) verbatims like surveys, reviews, website feedback, chat messages,
market research, and data from internal systems like call centre, help centre, and web
support collected via CRM tools.
This additional data can be brought into your social media analytics to give you a
comprehensive understanding of your customers across all touchpoints.
And speaking of understanding consumers, Quid Social is the social media analytics
tool that excels in helping brands do just that.

Quid Social Ups the Ante in Social Media Analytics


Quid Social melds seamlessly with your NetBase social media analysis and spreads it
out like a map, offering contextualized social insights at a glance. In other words, it’s
your social media topic – visualized.
This symbiotic relationship is great because using disparate data analysis tools or
sources is an unnecessary headache you don’t have to deal with when gaining actionable
intelligence from your data.
If your tools are clunky, cumbersome or tiring, then chances are you’ll miss something
in the gaps between tools, or from sheer frustration. Quid Social solves the problems
that users face when stitching their social media analysis together from disconnected
sources by offering a one-and-done solution. In other words, it’s a next-level social
media analytics tool that synthesizes your data to establish a cohesive and easily
understandable window into the online narrative.
In the same manner that Quid Pro allows users to parse company, patent, or news and
blogs datasets, Quid Social uses the same interface to deep dive into any social media
topic to extract insights from the returned data to inform your brand’s decision makers.
This is accomplished through next generation artificial intelligence (AI) driven social
media datasets that provide a 360-degree contextualized view of the social narrative on
any topic. And no matter how niche your topic may be, someone is out there talking

about it online. Quid Social mines the depths of all social media platforms, consumer
reviews, forums and much more – ensuring you capture it all.
And not having to transfer your data from one tool to another saves time and energy,
which translates directly to your bottom line. It also offers in-depth social coverage that
enables Quid users to make smarter, faster, data-driven decisions for their business,
bypassing the bloat of traditional social media analytics tools.
Quid Social allows users to not only analyze social conversations, but
discover emerging trends and themes, parse out key opinion leader (KOL) narratives,
analyze and monitor competitors and evaluate social media influencer performance –
just to name a few. The insights gained in these areas can boost your balance sheet by
way of the speed with which brands are able to make strategic business decisions,
allowing them to pivot to avoid pitfalls, spot market white space and stay a step ahead
of the competition.
Additionally, Quid Social visualizes the main drivers of social conversations allowing you
to see the interconnectivity between adjacent social sub-conversations. This
visualization of the online narratives allows users to quickly understand the angle from
which target audiences are talking about a topic or issue.

Since one of the underlying principles of social media analysis is to discover and meet
consumer need, this ability to see emergent conversations in response to social or
market stimuli allows brands to move quickly and grow share of voice before other
brands have had a chance to make a move.
And all of this carries the additional bonus of growing positive brand perception within
the social media coverage. They say the early bird gets the worm, and that applies
here: first-movers have the opportunity to design, deploy and steer the narrative. At
that point, the competition is playing catch-up.
Meeting the needs of consumers quickly translates directly to positive social sentiment
built on consumer love. And since people love to share their brand experiences on
social media, brands can directly track these changes as they apply to their own social
media metrics, and in how they relate to the competition.
Quid Social not only facilitates real-time analyses on any topic, but also the ability to
continually monitor brand-relevant conversations, which gives brands further intel into

where their messaging needs to be tweaked to have the most positive impact. Having
your social media analytics under one roof allows you to quickly evaluate your
campaign messaging to see what’s resonating and what’s not. At the speed of social
these days, the ability to monitor, adjust and implement your brand messaging on the fly
is a game-changer for your social media marketing team.
Additionally, these insights can be used to identify competitive areas of focus for
innovation and R&D. They also offer keen intel into how consumers are feeling or
reacting to market trends, innovations and competitive products relevant to your
industry.
Quid Social also covers the width and depth of any topic with social media coverage
that spans over 200 countries, mining major and minor social media input channels for
27 months of history and data going forward. It’s comprehensive social media coverage
for historic, and real-time analysis that digs into any online conversation – whenever
and wherever you need it to.

Below are a few of the social media insights and analytic functionality found in Quid
Social:
 Powerful network visualizations to discover emerging trends, topics and
patterns in data
 Scatterplot, bar graph, histogram and timeline views
 Optimized clustering for short text with auto-naming and summaries of
clusters
 Document and aspect level sentiments
 People, company and location entity feedback
 Social engagement counts
 Author and demographic information
 Extensive filter and tagging capabilities
 Color-by options for customized data visualization
 Customizable x/y axis parameters for extensive graphing insights

Simply put, the ways in which your brand can twist, extract and display your social
media topics are exhaustive and present a uniquely visual approach to social media
analytics.

But mining the social media conversation for market, competitive and consumer
intelligence is still only half the battle. What you do with that information takes you the
rest of the way.
Approach Is Everything
The best investment you can make is in social media analytics tools that bring all of the
above functionality into one place. This gives you a peek behind the curtain – and
you’re smart to make it about looking and listening and learning, not pushing your
agenda.
Think of it as having a VIP ticket to a show – it doesn’t get you on stage singing with the
stars unless you build a relationship with them over time. Once they realize you care
enough to come to every show you just might get pulled up to join them.
This is the best way to inspire engagement between your brand and your audience.
Popeye’s offers a great example here. Their marketing has been customer focused all
through the pandemic. And it’s posts like these that have endeared consumers to them
throughout the crisis.

That’s how you make friends on social media.
Use Cases for Social Media Analytics
Of course, it’s not just about making friends or engaging your audience – though those
are important marketing endeavours. There are many avenues to explore through
the insights that the data provides. It’s like peeling back an orange to discover the
segmented fruit within.
In other words, the insights found through social media analytics can power every part
of brand operations. Here are some examples:
Increase Customer Acquisition
Your customers are your brand’s lifeblood. Carefully managing their journey from early
awareness to established customer through social media analytics is vital for retention,
and your brand’s long-term health.
Consistently engaging with your consumers is critical, as is developing a track record of
being there for them with fresh innovation when new needs arise.
Case in point, Activision has seen their brand grow by delivering what they know their
audience wanted. Their Overwatch League netted more than 10M views in its first
week, and more than 200K per session.
Recently, Amtrak saw opportunity during the pandemic to make sweeping changes,
delivering tools and experiences to make their customers feel safe as they traveled the
rails.
Protect Brand Health
A brand is the collective whole of all the touchpoints and interactions consumers have
with it, in addition to the messaging coming directly from the company. Ultimately,
the consumer holds the keys to brand perception with brands constantly striving to
influence positive consumer sentiment. Brand perception affects many things, with the
biggest impact being your balance sheet.

Smart brands make moves based on social media analytics to push consumer
sentiment into the green, bolstering brand health in the process.
Like when the COVID-19 pandemic broke out, Chick-fil-A responded immediately by
donating $10.8 million to coronavirus relief efforts. And when social unrest broke out
earlier this year, they were quick on the ball to reach out on social media to their
customers, letting them know they care.
It’s quick and focused action like this that captures consumer love every time.
Lower Customer Care Costs
Customer care takes dedicated attention, and these days customer care is an ‘always
on’ situation. Consumers have no hesitation reaching out to brands when issues arise,
and they expect answers. Consistent social media analytics helps brands to put the
puzzle pieces of consumer need together to inform innovations to address frequent
issues in the most cost-effective manner.
The Westin circumvented fitness amenity complaints by answering consumer “wishes.”
They provided “well-being” experts to guide their fitness experiences while staying at
the hotel, and also signed a deal with Peloton to offer virtual group cycling to their
guests.
Maximize Product Launches
Social media analytics helps brands get in on emerging trends by informing them about
products and services that consumers want. Additionally, the actionable insights
produced help pinpoint market opportunities thereby minimizing risks to ensure your
product launch is a success.
For instance, check out how Ugg for Men made the most of their new line by gifting
some slippers to the right influencers, reaching more than 3 million consumers.
Boost Campaign Performance
Social media analytics allows brands to most effectively learn what their audience cares
about and what influences their purchasing decisions. These insights allow marketing
departments to craft more personalized and relevant marketing experiences. The
opportunities here for brands are enormous with the additional benefit of real-time
feedback allowing for adjustment mid-campaign.
The ways in which brands put their social media analytics intel to work are only limited
by creativity. For example, by smartly using influencers, iHeartRadio generated huge
engagement for the iHeartRadio Awards and nominated artists.
By creating thoughtful and engaging marketing initiatives, brands can build the
emotional customer connections that boost campaign performance – just ask the city of
Las Vegas.
Improve Crisis Management
The ways in which social media analytics can guide brands when crisis hits are worth
the price of admission alone in the costs saved by speed of reaction. The severity of the
crisis and the length of time that it languishes unmitigated, or worse unseen, can bring
critical consequences to brands that can last for years. Their sudden nature points to
the necessity of social media analytics in helping to round out your crisis management
response protocols.
Last year when Zion Williamson blew out one of his Nikes in the Duke-UNC game on
national television, it stood to shake up Nike if they didn’t get ahead of the online

narrative fast. Everybody saw it in real time, and fans took to social media like wildfire.
Luckily, Nike got in front of it quickly, and successfully steered the conversation.
Also, check out how James Madison University uses Social Monitoring to understand
public misperception and gauge when, if and how to respond to potential crises –
amongst other things.
Invest Wisely and Reap the Returns
Many tools offer some of the features listed above, and if you’re on a budget, starting
there is better than ignoring social media analytics altogether. Ultimately though, you
want to invest in a suite of tools that does all of the above, with a commitment to
innovate when the next technological breakthrough happens, and a history of having
done so before.
The more data you have access to, the better your understanding of your audience, and
the better you can serve them as they wish to be served. That’s what brings them back
for more. And that’s what social media analytics does for brands.
This has been part 1 of our Social Media Analytics Guide for 2020; designed to keep
you in-the-know on the tools, metrics and skills necessary to compete in an increasingly
global arena.

Why is Sentiment Analysis Important?


Friendster, MySpace, and even Facebook were the early frontiers of social media, back
before its marketing power was fully realized. Its simple purpose back then was to stay
in touch with family or friends. Today, it’s a maze of consumer opinion – opinions that
other consumers look to for guidance on which products to buy – or to avoid.
Consumer opinions have a lot of power and the only way to thrive in such an
environment is to understand exactly what is driving consumer emotion and opinion.
This puts some power back in the hands of companies, as with
this information you can solve problems, correct misconceptions, provide desired
products and services, and interact with consumers on their terms. Without that
information, you are sitting in a canoe without oars.
What is Social Sentiment Analysis?
Sentiment analysis is a layer applied to the rest of your analytics, putting the data into
context and categorizing consumer emotions by type and intensity.
Social listening, social monitoring, image analytics, customer experience analytics – all
of these rely on sentiment analysis for accuracy and usefulness. So, it isn’t something
that stands apart from the rest of your analytics, it completes your analytics package.
For example, a simple NetBase search on the term “gaming and eSports” tells you
there’s a lot of conversation about this favorite pastime. Over the past month there have
been 280+ mentions with potential reach off the charts.

Not a big surprise, given the events of 2020 with most of us stuck inside looking for
ways to entertain ourselves. But what do these numbers tell us? Not much of anything,
really. Sure, there’s more positive sentiments indicated by green than negative (red),
but we still don’t know what’s behind them.
Without sentiment analysis you’ll never know what you’re doing right or wrong. More
specifically, you’ll never understand why it’s being perceived as right or wrong. You can
only assume. And those assumptions are often wrong.
On the other hand, with sentiment analysis you have a ton of clues to explore further
and gain in-depth understanding of where your brand is doing well and where it needs
to rethink messaging:

Terms like “beautiful Gaming King pc” and “EVOS eSports” give you a hint at what
people respond positively to, while terms like “expensive,” “censorship” and “security”
clue you in to some negative consumer feelings.
Defining Sentiment
If only sentiment were as simple as “positive” or “negative.” Sentiment – just as human
emotion – is a wide-ranging spectrum of varying intensity. And intensity matters,
especially when it comes to social media monitoring.
When we calculate Brand Passion, we use a combination of Net Sentiment (a measure
of positivity or negativity, from -100 to 100) and Passion Intensity (the strength of those
emotions, from -100 to 100).
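NetBase doesn’t publish the exact formulas behind these scores, but a plausible toy construction of a -100 to 100 Net Sentiment score, together with an averaged Passion Intensity (rescaled here to 0–100 for simplicity), might look like the following. The function names, inputs and scaling choices are assumptions for illustration only:

```python
def net_sentiment(labels):
    """Net Sentiment on a -100..100 scale: the share of positive minus the
    share of negative posts, among all sentiment-bearing posts.
    `labels` holds +1 (positive), -1 (negative) or 0 (neutral) per post."""
    pos = sum(1 for x in labels if x > 0)
    neg = sum(1 for x in labels if x < 0)
    total = pos + neg
    return 0 if total == 0 else round(100 * (pos - neg) / total)

def passion_intensity(strengths):
    """Average emotional strength per post (each 0..1), rescaled to 0..100."""
    return 0 if not strengths else round(100 * sum(strengths) / len(strengths))

# e.g. three positive posts and one negative post give a Net Sentiment of 50
score = net_sentiment([1, 1, 1, -1])
```

Combining the two axes is what separates a brand people merely like (high sentiment, low intensity) from one they’re obsessed with (high on both).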

This categorizes whether consumers like a brand, are obsessed with it or just neutral.
And this gives brands actionable data.
And as David K. Williams put it on Forbes, what matters is insight “that shows whether
someone liked their food or if they liked your menu.”
Sentiment analysis is all about the details and where and when to focus your energy.
People who like your brand may not need immediate attention. However, if you can
activate the consumers obsessed with your brand, they’ll help convert the fence sitters
with their own enthusiasm.
But it’s not just those who love you that you need to locate, it’s also those who
may despise your brand. For this, you want to focus on what is driving their displeasure
and approach it head-on.
And then there are those pesky neutral emotions; they shouldn’t be ignored either. For
these, you need to dig deeper to discover whether consumers simply don’t care, or
whether something rubbed them the wrong way. Using social media analytics to connect
with them and discover their “why” could mean welcoming more consumers to your team.
The right social listening tool will help you discover things that move them, allowing you
to effectively target them with individual messaging that speaks to their passions. It’s not
about your brand – it’s about them.
Sentiment Analysis in Action
Human language is complex, which means your social listening tool needs to be
able to break it down to identify emotional terms. Thankfully our sentiment analysis
uses Natural Language Processing (NLP), which can do precisely that, and isn’t limited
to English versus French versus Cantonese, etc. NLP can identify slang and pop culture
terms, as well as emojis, and even images.
And image analysis is no less important than the rest in our increasingly visual
online world, where images frequently stand in for text. And as this trend continues
to build, the value of analysing sentiment in images increases correspondingly.

A picture is worth a thousand words, and without text it offers no data unless you have
image analysis to recognize your brand’s logo (for example). Today, it’s critical to have
that post about your brand – and all like it – counted.
And there are numerous ways to apply sentiment data once it’s in hand:
To Measure Brand Health
Think of sentiment analysis like an EKG which shows you peaks and valleys of emotion
indicative of anomalies in overall brand health. Having social media monitoring lets you
investigate what’s behind them and act accordingly, keeping your brand on an even
keel. As an example, below our timeline comparison is set to show sentiment around
Gordon Ramsay.

And just as in medicine, you can explore what’s causing a blip on the radar. We can
investigate the summary above by clicking anywhere on the chart. Selecting the peak
on January 5 brings us to this tweet about Gordon Ramsay’s influence on our
perception of food – and our skill (or lack thereof) when making it:

And negative sentiment shouldn’t be ignored, if you hope to keep your brand healthy. It
can give you more clues to who your audience is as well.
To Find Your Audience
Whatever you think you know about your audience, sentiment analysis can only
improve it. Perhaps you’re Amazon and want to nail down more specifics about your
consumers to effectively campaign. Having a competitive intelligence tool such as Quid
Social can draw out demographics such as gender and which aspects of your company
they are more interested in.
Our bar chart is filtered to show only female interest and is coloured according to
sentiment. For example, we find that Books is their largest interest and overall, they look

favorably on this aspect of Amazon. There is also some negative sentiment, though,
and a sizable neutral chunk that Amazon may want to explore further.

And you need to approach the way you market to each of these segments differently.
Brands do this by using social media analytics to monitor targeted sentiment, revealing
the common ground amongst members of your audience. This allows them to speak
personally to each of them, at scale, while delivering the experience they want.
And one way to accomplish that is by the help of influencers.

To Identify Influencers
Consumers trust other consumers more than they trust brands and marketers. And this
is where social influencer marketing can come into play for your brand. It’s where
passion intensity truly reveals itself.
Locating consumers who share their love for your brand on social media and who have
their own devoted following can help boost your brand’s messaging. Whether you
pay them to speak on your behalf, or simply engage them with a public thank you, it’s
important to know who they are, as they hold sway over your consumers.
And NetBase News can take it one step further and separate news influencers from
social influencers. This offers perspective around the differences in how the news talks
about your brand versus how people actually see you.

As you can see our top social influencer mentioned Amazon once, and from it there
were 270 interactions. And the potential reach is upwards of 45 million. That’s a reach
any brand would love to have, potentially rivaling any news media distribution.
To Identify Emerging Trends
Trends come and go, and just because there’s a trend doesn’t mean you should
automatically seek to leverage it. Sentiment analysis with the aid of social media
monitoring helps you determine how invested your specific audience is in any trends
that come along. And what, exactly, they feel about said trends too.

For example, there were lots of new trends in 2020. The year, as a whole, went a long
way toward bringing back the basics i.e., baking, cooking from scratch, DIY projects,
etc. Will these trends follow us throughout 2021? Will trends in your vertical shift? Only
your social data will tell you. Our sentiment wheel around baking shows large amounts
of positive sentiment, a good indicator that it’s still a big hit.
However, it’s not the complete picture and would need further social analysis around
what kind of cooking is resonating – and with whom. Psychographic insight (capturing
values, opinions, attitudes, interests, and lifestyles) complements demographic intel

here. And there could also be regional differences to be aware of (there usually are)!
And all of this should be explored before planting your flag in a trend and calling it ‘the
next big thing.’
To Inspire and Get Feedback on New Products/Services
The ability to get honest feedback so quickly is something we take for granted with
social media, and yet it’s one of the biggest attributes of sentiment analysis. One of the
best uses of this is prior to launching a new product or campaign – to be sure your
audience even wants what you’re offering. Even better? When brands capture intel from
a post that sparks an idea for something consumers crave and can’t find, so they’re
creating DIY options – such as vegan pretzel bites for game day.

To Identify and Resolve Problems


If there’s one thing consumers do well, it’s talk about the brands they love – and those
they don’t. Social media has become the sounding board for all disgruntled customers
to vent.
Sentiment analysis helps you catch these negative sentiments and determine whether
the intensity of emotion is headed into the danger zone. Savvy brands have learned to
set alerts for damaging keywords, so they know immediately if something is about to
spiral out of control.
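A minimal version of such a keyword alert can be sketched as below. The danger terms and threshold are invented examples; in practice you’d rely on the listening platform’s own alerting features, tuned to your brand, rather than hand-rolled code:

```python
import re

# Illustrative crisis keywords only; a real watchlist is brand-specific
# and maintained over time.
DANGER_TERMS = {"boycott", "lawsuit", "recall", "scam", "outage"}

def needs_alert(post, threshold=1):
    """Flag a post for human triage if it mentions at least
    `threshold` distinct danger terms (punctuation-insensitive)."""
    words = set(re.findall(r"[a-z']+", post.lower()))
    return len(words & DANGER_TERMS) >= threshold
```

The real value comes from pairing keyword triggers with the sentiment-intensity scoring discussed above, so a single sarcastic mention doesn’t page the whole team.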
But if you use social media analytics to keep an eye on sentiment and resolve negative
issues before things get to that point, that’s even better.

Let’s take the Kraft Mac and Cheese “Send Noods” campaign as an example. They thought
it would be a funny campaign. However, a significant segment of their consumers were
not impressed:

Kraft could have been more careful; however, they quickly realized their mistake and
smoothed things out.
The other side of not monitoring for shifts in consumer sentiment is that you open the
door for competitors to step in and gain the attention of your customers.

To Assess Competitors
We discussed the importance of knowing how you’re doing online, but that’s only part of the
equation. Social media is an open book (for the most part), so apply your sentiment
analysis to competitors as well. After all, you share an audience, so it makes sense to
know what consumers love and hate about other brands in your category.
You’ll save a ton of time and money by letting other brands do your research for you
and make mistakes so you don’t have to.
Making the Case to Stakeholders
With so many social listening tools out there, decision-makers may be motivated to
cheat and get by with lesser tools. Budgets are real, but don’t let the bottom line keep
you from researching your options: shoddy intel capture is worthless, so you’re not
actually saving money – you’re throwing it away.
Analytics for social media are an investment worth making – especially when that
insight includes sentiment analysis, image analysis, and customer experience analysis.
If you purchase tools that integrate with other systems, like your in-house CRM, you’ll
not only get even more accurate and highly relevant data quickly, but will finally put all
of that previously unused intelligence gathering to work! That’s an ROI most brands can
experience immediately.
And if you’re still unsure, look at the top brands using social analytics effectively, and
follow their lead.
There isn’t much time for trial and error as news spreads fast on social media, so you
want data you can trust straight out of the gate. Sentiment analysis provides that – and
offers solutions that stretch beyond the marketing department to benefit your entire
organization. Talk about something to love! Reach out for a demo today and take
control of your brand health!

Why Audience Analysis?


The digital space is evolving at an ever-faster pace and becoming more crowded as
companies and people take to the digital airwaves. As a result, the importance of
personalization and precise audience targeting has also increased.

It’s now vital to dial into the minutest of details to know who your brand should target,
how (content/messaging), where (channels), when, and why. And all of this is aimed at
engagement, which is just as crucial as reach these days as it aids in brand awareness
and, ultimately, conversions. Our Key Metrics gives us a quick overview of KFC’s
Mentions, Net Sentiment, Total Engagements and Potential Impressions. Any of these
can be dissected for more detailed analysis.

The beauty of audience analysis using social listening tools is the speed with which you
can perform such detailed evaluations. What took lots of time and effort in the past
using old school focus groups and surveys, can now happen within moments or hours.
What Are You Actually Analysing?
Audience analysis helps discover new audiences based on the information they share on
social media, unlike surveys and polls – where you’re limited to a set of conclusions
based on the specific questions asked of the consumer.
Some of the intel gathered using audience analysis may be demographic – like age or
location – but the most rewarding insight is psychographic. It comes from looking
beneath surface-level data to uncover consumers’ interests and passions, how they
see the product or service your brand represents – and how it fits into their life.
This type of analysis leads you to create audience segments – or groups of consumers
with shared passions. Imagine how much more authentic your messaging is when
you’re talking to a group of “Gen-X pizza lovers who have Netflix watch parties in L.A.”
versus “35-50-year-old men.”
Broader demographics are also useful, but they’re more of a jumping-off point for
getting to the good stuff, like how a particular segment views your top competitors –
and why.
The idea is to step beyond your brand and look at the interests of those engaging with
your entire category, and what they care about – in relation to your brand and others
like it, and then beyond that. And your social media listening tool should provide a way
to separate these interests neatly into categories for easier exploration.
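A toy sketch of this interest bucketing might look like the following. The category names echo the Baking example below, but the keyword lists and the naive substring-matching rule are invented for illustration (real tools use trained classifiers, not keyword lookup):

```python
# Illustrative interest categories and keywords; a real listening tool
# learns these groupings rather than hard-coding them.
INTERESTS = {
    "Food and Drink": ["bake", "recipe", "sourdough", "coffee"],
    "Family": ["kids", "family", "mom", "dad"],
    "Pets": ["dog", "cat", "puppy"],
}

def tag_interests(post):
    """Return every interest category whose keywords appear in the post.
    Naive substring matching: fine for a sketch, noisy in production."""
    text = post.lower()
    return [name for name, kws in INTERESTS.items()
            if any(kw in text for kw in kws)]
```

Tallying these tags across a topic’s posts is what produces the share-of-interest view described in the text.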
Here is an example in the category of Baking. Aside from Food and Drink, Family,
Music and Pets take the largest share of common interests:

This is important intel that informs messaging and future potential campaigns – and
even product development. It certainly helps brands understand what drives audiences
to buy. Let’s dig into that part a bit more.
The Mechanics of Audience Analysis
You can apply audience analysis widely to better understand your category, products,
services, even other brands – and all of this helps inform your next move. But what
you’re really looking for is what drives your audience to buy. That starts with knowing
what they care about, versus just making a wild guess.
And you can apply this type of social listening whether you’re looking for something
specific to achieve a brand goal, or a combination of all factors to create an overall
picture of your audience, or varying audience segments.
The key to all of it is social sentiment analysis, and this factor can be applied to all other
tactics. Sentiment analysis reveals your cheerleaders and, likewise, your most determined
detractors. This helps you assess the actions needed to accelerate your brand – and
avoid sweeping changes for an audience that isn’t invested.
If you have a lot of passionate negative sentiment about customer service, for instance,
you’ll want to focus your efforts in that direction. Or if you’re seeing a lot of negative
conversation about competitors in your category, you know to make sure you keep up
with offering stellar service, and even find ways to stand out amongst the crowd in your
messaging.
Let’s look at an example. Here’s the sentiment graph when we search on the term
“Gaming and eSports” in NetBase:

Overall sentiment is positive, and while those spikes of high positivity are important,
perhaps more critical is what drives the negative sentiment.
Either way, exploring the Attributes word cloud gives us some insight into the various
conversations happening, and those which have gained the most traction. Beautiful
Gaming King PC is a positive sentiment which has quite a lot of traction. Clicking on it
reveals what people are saying:

Social media analytics provides information revealing that a gaming chair and mouse
are behind this sentiment.
But words are only part of the picture; remember to include image analysis as you
explore sentiment – not all social conversation is text-based, so you want to be sure you
know if your brand is being visually slammed.
And, of course, if there are images propelling you to the top of the positive end of the
spectrum, you want to know that too – so you can share the most engaging content with
your audience. It might seem a small thing, but Burger King’s Moldy Whopper is an
example of a brand that turned a campaign on its head with a single, startling image.
Let’s break down other key considerations in audience analysis.
Location
Using the topic you’re searching, narrow your results to view the location where the
conversation is happening. Let’s look at results for gaming and eSports; our map shows

that 51% of gaming conversations are happening in the US. For further details we could
click on the country and see relative posts, even get as detailed as street level.
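A back-of-the-envelope version of this country breakdown – the kind of tally that yields a figure like “51% in the US” – can be sketched as follows. The `(country, text)` data shape is an assumption for illustration; real tools geotag posts for you:

```python
from collections import Counter

def share_by_country(posts):
    """posts: iterable of (country, text) pairs.
    Returns {country: whole-percent share of mentions}."""
    counts = Counter(country for country, _ in posts)
    total = sum(counts.values())
    return {c: round(100 * n / total) for c, n in counts.items()}

# three US mentions and one UK mention -> US takes 75% of the conversation
shares = share_by_country([("US", "a"), ("US", "b"), ("US", "c"), ("UK", "d")])
```

The same tally pattern works for any categorical breakdown – gender, age bracket, channel – which is why the platform can pivot between them so quickly.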

If you’re a gaming company such as Xbox, you want to have a sense of where your
audience is and who they are. Social Listening helps you find out. Otherwise, your
competitors will surely use the information to their advantage, because they’re
undoubtedly searching for this insight too. It’s a race to see who does it best!
Gender/Age
Next, look at gender and age to create a baseline for further analysis. Here’s the
breakdown for Starbucks Coffee, filtered for English-language posts:

Their audience comprises mostly 25–34- and 18–24-year-olds – so millennials and
Gen Z. What could Starbucks do to attract those who fall into older age brackets? What
do their consumers love or hate about Starbucks? And though all ages are talking about
this caffeinated brand – it’s doubtful they’re all talking about the same thing.
Look for the specific topics of conversation broken out by age and use them to create
more meaningful audience segments that speak to specific interests. You’ll likely cross
gender and age lines – and that’s all the better. There are so many ways to approach
this. Let’s look at another!
Topics/Trends
Let’s continue with Starbucks as we go deeper. We’ll look at Trends which offers you a
look at popular Terms, Hashtags, People, Brands and Emojis. All of them reveal the
ways your brand – or category – are being talked about on the social web:

For example, many other brands are tagged in conversations directed at Starbucks. Is it
about Stocks and Trading? Is someone waiting at a GM car lot sipping on Starbucks
while they finalize paperwork? If a company isn’t sure who else their consumers are
talking about, they may be missing out on identifying big competitors. Are there
surprises in the mix? Maybe. Are there potential partnership opportunities? Certainly.
Further analysis will reveal more about how to engage their audience.
They already used their intel to look to Beyonce for partnership by placing her albums in
their store. Given the attention she gets via her 164 million followers, she’s also an
influencer. Double win.
Identify Common Interests
The next step of your audience analysis is looking for segments of people with common
interests. These could be entirely new audiences for you to engage with.
Vegan food, for example, is not just for Vegans. The world is not made up of only meat
eaters and vegans. There are more and more flexitarians and vegan-curious consumers
today. And that’s why savvy brands such as Mendocino Farms Sandwich Market are
rolling out new menus full of vegan choices. They are reaching more segments with a
common interest.
Identify Influencers
A big part of audience analysis is finding the people who love you most, just for being
you. Or they love your category enough to fall in love with your brand if you can tap into
what inspires that initial passion.
They might be celebrities or organic influencers who have a large and engaged
following of their own. These brand fans wield enormous influence. If identified, they can
boost your company and amplify your messaging.
Part of audience analysis is understanding what motivates your audience to share.
KFC, for example, realized that their consumers love audacious things, which led them
to make the mini movie Recipe for Seduction. But they didn’t stop there: they also made
a game console with a warmer just for your fried chicken. And the crowd went wild:

These posts show very high engagement and potential impressions, illustrating how far
a brand can reach if they have the right influencer carrying their message.
And what about detractors? You can make them a segment as well – people to include
when you solve a similar problem, so they can see your efforts and perhaps change
their minds about you. And oftentimes, they offer insight around unmet needs. That’s
valuable intel.
Channels/Content
It’s also important to understand how and where your audience wants to hear from you.
If you’re on Twitter but your audience is all about Instagram, you’re missing your target.
And remember, the social web isn’t confined to Facebook, Twitter and the like. Be sure
to look at blogs, news sites, review sites, forums, etc. Here are the various places
talking about breakfast king IHOP right now. This is looking at all domains, but can be
filtered to show just blogs, forums and even news sources.

There will always be surprises in the mix – like Reddit, perhaps – so never just assume.
Look to your social data for answers.
Your Audience Makes Your Brand
Don’t stop at marketing once you’ve identified and targeted your audience. Your customers
and prospects are the reason you do everything you do – or they should be. You never
have to go in blind again when conceiving a new product or wonder where the gaps are
in your customer experience.
Your audience offers everything you need to proactively give them what they want, as
well as the info required to solve problems when they occur. But you have to have the
right social media analytics tool to hear them.
Use your audience analysis data throughout your business, and you’ll be unbeatable.

What is Social Media Monitoring?


Social Media Monitoring is often confused with other terms like social media
listening or social media intelligence. But it’s different. And each of these monikers has
its own place in your social analytics toolbox.
Social Media Intelligence is an overarching term covering a few key areas of social
analytics, including social media monitoring and social media listening, with a focus
on Competitor Analysis. It represents the sum of the parts (i.e., data) these other tools
unlock.
Social Media Listening uncovers consumer insights you can apply to brand strategy. It’s
an ongoing task that focuses on getting to know your audience and their emotions, so
you have a baseline understanding of the consumers you want your brand to reach. The
goal is to learn what consumers care most about, and bring that into your customer
experience at every level.
Social media monitoring is also an ongoing endeavour, but with a slightly different
focus. If you think of social listening as creating a baseline for what your audience feels,
social media monitoring is about maintenance of that baseline.
Next Generation AI-powered Social Analytics
Because technology has made the world “smaller” and more competitive, Next Gen AI
Analytics is now the gold standard for achieving next level speed, accuracy, and control
of analytics data. And this is key to capturing big ideas.
Don’t get overly caught up in the terminology though. Think of it this way: If social media
monitoring is the instrument panel of your car, telling you how fast you’re going, and

whether you need gas or an oil change, Next Gen AI Analytics is the engine – powering
everything and ensuring you can rely on what your instrumentation shows.
That’s because it captures everything related to your search parameters including
slang, common misspellings, emojis, and even brand logos captured in photos. This
level of detail is vital to conduct your research from a comprehensive dataset. After all, if
you’re missing a portion of the conversation, it will likely skew your results making them
less than ideal to use in your decision-making process.
Next generation artificial intelligence uses natural language processing (NLP) to capture
and categorize social media posts by grammatical structure. This makes it easy for
brands to isolate areas of interest within a conversation for things such as behaviours,
people, brands, objects, hashtags, etc. It can also profile the users themselves for
demographics, psychographics (beliefs, interests), geodata, and much more.
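To make the idea concrete, here is a toy sketch (not any vendor’s actual pipeline) of isolating hashtags, @-mentions, and the remaining plain text from a post using regular expressions; production NLP engines go far beyond this, parsing full grammatical structure and entities:

```python
import re

def extract_entities(post):
    """Split a social post into hashtags, @-mentions, and remaining text.

    A deliberately simple stand-in for the entity isolation that
    commercial NLP pipelines perform on social conversations.
    """
    hashtags = re.findall(r"#(\w+)", post)
    mentions = re.findall(r"@(\w+)", post)
    text = re.sub(r"[#@]\w+", "", post).strip()
    return {"hashtags": hashtags, "mentions": mentions, "text": text}

post = "Loving my new latte @Starbucks #coffee #morning"
entities = extract_entities(post)
# entities["hashtags"] -> ["coffee", "morning"]
# entities["mentions"] -> ["Starbucks"]
```

Each isolated field can then be aggregated across thousands of posts to surface the people, brands, and hashtags dominating a conversation.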

You need this level of precision because you’re looking for anomalies – anything that
stands out from the norm. These anomalies span the emotional spectrum from positive
to negative – and both extremes require awareness and attention. This is why social
media monitoring is done in real-time.
Negative Sentiment Gets Lots of Attention
Let’s look at the negative side of the equation first, since social negativity is so
damaging when it goes viral. When consumers chat about your brand in a negative way
on social media, they’re likely not the only customers who feel that way. Here are three
eye-opening statistics to keep in mind:
1. 50% of customers will do an about-face and shop with a competitor after a
bad customer experience. While you can’t please all the people all the time,
an effective social media monitoring regimen will show you the common pain
points consumers have with your brand. You can use this insight to revitalize
your messaging and customer experience strategy to address these issues
head on.

2. Since the beginning of the pandemic, 75% of US customers say they’ve tried
new shopping behaviours for both economic reasons and shifts in personal
priorities. Consumers are quick to adapt to changing circumstances so if your
brand isn’t meeting a need, they’ll find one of your competitors that can. In
addition to ever-changing trends, pandemic disruptions continue to transform
consumer behaviour. As such, social media monitoring is indispensable in
staying ahead of the curve.
3. Social media monitoring tools are your first line of defence in a PR
crisis. PwC’s Global Crisis Survey discovered that 95% of business leaders
feel underprepared for an emergency. A crisis can emerge in the wee hours
of the night when you least expect it. Without a swift and agreeable
resolution, negative sentiment towards your brand can snowball out of control
in just a few hours. Setting alerts in your social media monitoring software
around your brand health metrics is your best defence against an unforeseen
catastrophe. That way you can get messaging out quickly and quell the
flames before it’s too late.
Social Media Monitoring Alerts You to Crises Before They Spiral
The beauty of social media monitoring tools – the best ones, at least – is you can set up
these alerts so you know when there’s a spike in sentiment indicating a topic or post
you shouldn’t ignore. That way you can address issues quickly and stop angry posts
from going viral. Or at least not get blindsided by them when they do!
These alerts use keywords you choose, as well as a metric called passion intensity –
that lets you know the strength of emotions in social posts. When passion intensity
spikes, it’s time to investigate why. You can also add subscribers to your alerts so your
entire team is informed at a moment’s notice.
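Under the hood, such an alert is essentially an anomaly check on a tracked metric. A minimal sketch (assuming a simple baseline-plus-standard-deviation rule; commercial tools use far richer models) might look like this:

```python
from statistics import mean, stdev

def spike_alert(history, current, sigma=2.0):
    """Return True if `current` sits more than `sigma` standard
    deviations above the historical baseline of the metric."""
    baseline = mean(history)
    spread = stdev(history)
    return current > baseline + sigma * spread

hourly_passion = [41, 44, 39, 42, 40, 43, 38, 45]  # a normal week of readings
spike_alert(hourly_passion, 44)  # within the usual band: no alert
spike_alert(hourly_passion, 80)  # sudden surge: notify your subscribers
```

The same rule works for any monitored metric – net sentiment, mention volume, passion intensity – with the threshold tuned to how noisy that metric normally is.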

In his NetBase Quid® Live 2021 presentation, Becoming a Strategic Influencer in your
Organization, Evan Escobedo, Global Lead for Listening, Analytics, and Insight at
Western Union spoke about their robust social media monitoring strategy. In addition to
ad hoc market research and competitive intelligence, Western Union uses social

monitoring tools for monthly reporting, real-time monitoring, community
management/resolution, and sentiment analysis.
This type of in-depth social media monitoring is sure to foster familiarity with your brand
health metrics and the ability to spot anomalies. Defining alerts around your metrics
such as net sentiment and passion intensity adds that extra layer of security and
compensates for human error. Crises come in the blink of an eye, and the earlier you
catch them, the better.

Getting the Full Picture with Images

Aside from text-centric sites like Reddit and Tumblr, social media is largely devoted
to photos and video. When your brand logo makes appearances in these posts, it’s part
of your brand conversation. That’s because they contribute to potential impressions and
play a role in how people see your brand.
As such, it’s critical that you are able to pull these “mentions” into your brand
conversation to determine how these posts contribute to perception. And this includes
photos of your logo without a brand mention in the text field or backwards and altered
logos. Your tools should capture all of it.
Given that, imagine the damage your brand could suffer if someone misappropriates
your logo. Or how much visibility complaints with images could garner. So, your social
monitoring software must include Image Analytics or you’re letting a huge chunk of
content slip beneath your radar. Sadly, not all social media analytics tools offer this
feature.
How Does Social Media Monitoring Work?
Social media monitoring allows you to establish benchmarks around the things that
matter to your brand that you want to track over time. These are commonly centred
around brand health, customer service initiatives, campaign tracking, product reception,
etc. However, brands are free to monitor any measurable attribute found within their
social listening tools to see whether their initiatives are successful or not.
Of course, the nightmare scenario for any brand is reputational damage from a viral
post, and safeguarding against it is one of the many ways social media monitoring
works for brands. This is why it’s so important to be alerted to and take care of anything
brewing in the moment. Because anything left unresolved – or just unrecognized – can
pick up social steam and create a PR crisis. And that’s to say nothing of blatant
attempts to do brands harm by any number of attackers.
Are there cases when you shouldn’t take action? Absolutely. Sometimes responding
only fans the flames and does more harm than good. It can draw more attention to
something that would’ve just gone away on its own. The most important thing is
awareness – and knowing your text analytics tools are accurate.
When your data is reliable, knowing about negative or potentially damaging posts
means you can make smart decisions, and act – quickly – when it’s appropriate. Erring
on the side of caution, by handling problems when they’re small, is always best.
In addition to crisis management, your social media monitoring tools can function in a
predictive capacity as well to help spot disruptions from competitors. That’s because
everything you’re tracking on your own brand can be done in a competitive intelligence
capacity as well.
Cam Mackey, CEO at SCIP, touched on this subject during his presentation, Looking
into the Crystal Ball: Takeaways from an Innovative Prediction Market Study, and
pointed out disruptions that are already occurring. He calls it the ‘burning platform’
where incumbents in industries with large capital investment barriers are being
threatened and unseated by unorthodox players.
Examples he gave include:
 Tesla’s market cap is now larger than Toyota, VW, Daimler, GM, Ford, BMW,
Honda, Nissan, Subaru, and Hyundai combined.
 Amazon Logistics now ships more packages in the US than FedEx.
 Over 50% of mattresses and 30% of eyeglasses are sold online to consumers
eschewing traditional retailers.
Mackey goes on to say that there’s a 77% likelihood that this type of non-traditional
disruption will be the greatest risk posed to businesses going forward – a factor that
could spell the downfall of brands that aren’t taking the ‘little guys’ into account.
Savvy brands can set their social media monitoring sights on category innovations and
up-and-coming start-ups with disruptive power to safeguard their competitive
advantage.

What About Positive Sentiment?


Social media monitoring isn’t all about gloom and doom. You also want to be alerted to
positive swells in sentiment – because anything surrounding your brand garnering
positive sentiment can be looked at as a rung on the ladder leading upwards.
As we just touched on, you can monitor sentiment in competitor’s social conversations
to uncover what their consumers love about their products and services. This is
strategic intel that can be put to work in your marketing and R&D departments. It’s how
you leverage what’s already working and make it better.

Additionally, you’ll want to monitor the positive side of emerging trends to understand
who the audience is that views them favourably. This is extremely helpful in gauging how
much of your audience is talking in these conversations, to justify whether or not to align
your brand with a particular trend.
Brand Health Management
Brand health is a big reason social media monitoring should be part of your daily
operations – but it’s hardly the only one. Social analytics uncovered through social
monitoring are also not limited to the marketing department. Your monitoring tools will
allow you to keep tabs on trending conversations within your brand narrative. These can
all be explored to monitor movement over time so you’ll know whether interest is
waxing or waning.
This ability allows you to grow your brand in the areas of positive sentiment while
addressing the concerns found in others. For example, here’s a one-month snapshot in
Quid Social of L’Oréal’s social conversation showing the top ten topic clusters colored
by sentiment. Tracking this analysis over time will show you where to apply messaging
and product innovation, helping to galvanize your brand health.

L’Oréal one-month social snapshot of top brand conversations with sentiment. 12/16/21-
1/16/22
Additionally, when you uncover data about customer service issues while tracking
conversational movement, it’s not just about putting out that fire. It lets you identify if
there’s a specific area where you’re not making par in your category, or where you’re

not living up to the expectations consumers have of your brand or products. It’s also
how you know you’re doing better than expected, or better than competitors, and why.
Imagine the money you can save by following the product launch of a competitor, and
discovering consumers hate it? If you had a similar idea, you can avoid that loss – or
offer a better version and claim that audience for yourself.
Identifying new influencers and monitoring the activity of current ones is another area
where Social Monitoring comes in handy. Those positive sentiment spikes may lead you
to a social celebrity worth putting to work for your brand. It also helps you see how
influencers you already use are succeeding – by alerting you to content that is particularly
engaging, for example.
Social monitoring benefits all areas of brand operations from customer service, to
marketing, to sales, to research and development. So when you spend on social
monitoring tools, you’re making an investment that brings far-reaching returns.
How Do I Choose a Social Media Monitoring Tool?
A question often asked – especially by smaller brands – is whether you really need to
pay for social media monitoring tools. After all, there are a lot of free tools out there.
That’s true – and when you’re just getting started, they’re better than nothing. But you
have to remember, you get what you pay for.
First, let’s share some top social media monitoring tools to inform your efforts – and
then we’ll share criteria for selecting one! Here are our picks:
1. NetBase Quid® offers social media monitoring tools for every use case, from
tracking and measuring followers to monitoring shifting consumer demographics and trends. It
tracks shifting sentiments over time and has everything your team needs to set your
company up for success.
Powered by next generation AI, its natural language processing (NLP) capabilities
capture the chatter online and dissect it to reveal what your customers really care
about. Assumptions are dangerous, after all. It also captures the context behind
words, offering an accurate, in-depth understanding of both consumer needs and
market shifts – all in real-time.
2. Rival IQ tracks every conceivable competitive measurement, including
followers and engagements. One of its biggest highlights is its best-in-class
competitor analysis.
The adage guiding readers to “keep your friends close, but your enemies closer” is
certainly applicable to business. With Rival IQ’s social media monitoring tools, you can
set alerts to notify you of any move your competitors make. This tool has a valuable set
of audience growth metrics, locating which social channels are seeing the most follower
growth, so you can plan accordingly and aren’t caught off guard by a competitor
snagging your market share.
And it gets even more detailed, offering you a look at the average activity of your
competitors. You’ll get intel on post frequency, emojis used in their bios and even
hashtags being utilized to drive significant traffic.
But it’s not merely about watching your competitors’ steps, as Rival IQ offers
competitive benchmarking to see how your brand measures up in the industry – how it
stands alone. And you can identify phrases, hashtags and topics that drive conversation
within your category to see where your brand can start interacting with those
conversations.

3. Reputation helps companies track what consumers are saying about them, not
just on social media but on review sites as well. With their tool, you can track, manage,
and respond to reviews from all over the web in one place. Or take advantage of their
review booster and get access to custom response templates to use for automatic
responses to reviews. This saves you valuable time that can be better directed in other
areas – like social media monitoring.
4. Monday helps brands map out social media posts and collaborate with their entire
team, ensuring that you’re all in the same pond and swimming in the same direction.
Their social media planner template offers ideas for posts, so a brand could alternate
between quotes, questions, resharing funny memes in your industry, or current world
events, for example. The template keeps you on track and prevents your social media
page from becoming too repetitive. Once you have everything planned out, you can
schedule your posts to be published right away or on a future date.
5. Storyclash offers an AI-powered search engine that allows brands to track
keywords, hashtags, account names and swipe up links in every part of influencer
content. And because of their text recognition, the text in your brand logo can be
tracked. This way, you can capture and measure all mentions of your brand. This is
particularly helpful when running an influencer campaign. And with Storyclash’s social
media monitoring tool, you can keep track of KPIs, making better data-driven decisions.
And it monitors TikTok, Facebook, YouTube, Twitter and Instagram.
6. Loomly’s social media monitoring tools help brands create and maintain a
consistent brand voice across multiple social channels. With their platform, you have
multiple tools and collaborative views at your disposal, keeping all team members on
the same page, at all times. It also offers ad mockups and commenting, version logs
and approval flow so everyone is speaking the same language. And there’s a database
of ideas to pull from as well.
7. Upfluence locates possible influencers for your company and then allows you to
track them, measure their performance, and see if they are the right fit for you. Not
every influencer is going to reach the demographic you need, nor want. And in fact, you
may have followers already that are influencers in their own right. Upfluence locates
these micro-influencers for you.
At a certain level, making do with free or subpar tools is like trying to hit a homerun for
the Red Sox using a child’s toy bat. It simply won’t get the job done. But we understand
that it’s overwhelming to sift through the numerous solutions out there. So, here’s what
matters:
Accuracy and Speed:
Social data is useless if it’s inaccurate. The ability to parse multiple languages –
including slang and emojis – is critical to this point. So is the ability to analyze the
sentiment behind posts and images. And it all has to happen in real-time. Today, that
means AI analytics driving your social data engine – because you don’t have time
to waste on inaccurate results, or data that takes too long to come through.
How do you know if your tools are accurate? Transparency. What that means is you
have the ability to climb into the posts behind the pretty charts and graphs. This is
critical anyway for getting to the bottom of the questions your brand needs answered. If

your tool says your net sentiment is 95% and you can’t click into the posts that compose
this number, then you need to look elsewhere.
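As a point of reference, net sentiment is commonly computed as the gap between the positive and negative shares of opinionated posts (exact formulas vary by vendor; this is a generic sketch):

```python
def net_sentiment(positive, negative):
    """Generic net-sentiment score in [-100, 100]: the percentage-point
    gap between positive and negative posts among opinionated posts."""
    total = positive + negative
    return 100.0 * (positive - negative) / total if total else 0.0

# 780 positive vs. 20 negative mentions yields a net sentiment of 95 --
# exactly the kind of headline number you should be able to drill into.
score = net_sentiment(780, 20)
```

A transparent tool lets you click from that single score down to the individual posts that produced it.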
Social media monitoring is built on natural language processing (NLP) which is the AI’s
way of parsing text the way a human would. If you climb into these posts and things
look fishy, then that’s a huge red flag. You are the ultimate judge on the trustworthiness
of your tools, but they have to let you under the hood to watch the engine run.
Take a walk around your tools and kick the tires. Run a social media image search on
your brand logo and see whether it surfaces every post where your logo appears.

Integration:
Social media monitoring, as we stated at the beginning of this post, is just one piece of
the puzzle. It’s the ongoing effort to track your established baseline. It only works well
when combined with other modalities, like social listening.
Additionally, you may want the ease of engaging back through social scheduling
software or combining CRM and other data. You want a social monitoring tool that can
interface with the tools you already use.
No 3rd Party Add-ons: They might seem cool, but they won’t be under the control of your
vendor. It’s better to go with a vendor dedicated to innovating on their own. Otherwise,
you may fall in love with a feature that suddenly goes away.
Commitment to Innovation:
This might not seem like a big deal – aren’t all tools ultimately comparable at a certain
level? Not quite. Tool providers that don’t update their offerings regularly may fall behind
when big changes happen suddenly – and it’s not like social platforms are going out of
their way to let you know they just changed their algorithms. When your tools fall
behind, they take you with them. Meanwhile, those who invested in best-in-class tools
fly further past you.
Industry/Peer-Recognized:
The one thing the internet offers is transparency. If you’re unsure of a tool’s merit, the
reviews will surely reveal all you need to know. Just as consumers rely on Consumer
Reports when purchasing things like cars and appliances, there are places to look when
you want reliable information about social media analytics tools. TrustRadius, Forrester

and G2 Crowd are just a few where you can compare top solutions. We’ve been listed
at all three and more.

Suggested Business Models on social media: In the early days of entry onto social
media platforms, most start-up enthusiasts looked at getting as much traffic as possible
onto their website and then finding ways to generate revenue through that traffic. This,
however, cannot be a long-term strategy, as it limits one’s options to simple techniques
such as advertisements and sending promotional communication to the group. These
techniques are neither consistent nor long-lasting. However, there are unique ways by
which businesses can generate revenue on this medium from early on. The literature
notes that business models based on digital platforms like social media are a new
phenomenon, and there is still a need for research to understand them from various
angles (El Sawy and Pereira, 2013). Some of these possible general online business
models are discussed below:

1. Freemium Model: The venture capitalist Fred Wilson first explained the concept of
Freemium in 2006. The journalist Chris Anderson described and popularized the
concept in his book “Free: The Future of a Radical Price” (Anderson, 2009).
Under this model, the basic service is offered free of charge, and premium services with
advanced features are made available to those members who pay (Mounier, 2011).
Mounier (2011) in his research has explained how Freemium is a sustainable economic
model for open access electronic publishing. Some of the other successful examples of
this model are Flickr, LinkedIn, and UserVoice. Cloud service providers such as
Infrastructure providers (IaaS), platform providers (PaaS) to software service providers
(SaaS) – are also examples of this. Amazon allows anyone to create a simple cloud in
their infrastructure as a trial service for 12 months. A good way to go about this is to
initially offer limited services on the site for free, and later take customer input on
their willingness to pay for some of them. Deciding on the best value proposition for
customers, with a balance between free and paid, is of utmost importance for the
success of firms depending on this model.

2. Affiliate Model: In this business model the firm has an arrangement with another
business firm that may be offline, to drive traffic to their products or services. The
enterprise using this model does not need to carry inventories, process payments or
take orders as it only acts as a connecting point (intermediary) between potential buyers
and sellers. The firm is responsible for developing and updating relevant content on its
portal that shall attract relevant buyers for its clients. The portal carries advertisements,
or links to potential sellers’ websites or connecting points. Businesses such as
Illuminated Mind, Show Money and DIY Themes are a few examples. For this model
to be successful, it is critical that the start-up develops a relevant community of
like-minded, common-interest persons. If the community consists of a large but
irrelevant and uninterested audience for the sellers, this model is bound to fail.
Relevant, interest-building and updated content is necessary to convince buyers to
purchase what you may recommend.

3. Subscription Model: The Subscription Model is traditionally used in the print-based
newspaper industry. It is a traditional revenue model that describes the proportionate
split between subscription and advertising income of a newspaper (Erlindson, 1995).
Literature noted that the subscription model is financially promising for newspapers
(Chyi, 2005; Donatello, 1996 as cited in Kahin and Varian, 2000). The subscription
model proposes income for online ventures by offering subscriptions. It is an economic
model that has been experimented upon continuously in the industry. Due to these
experiments, many versions of this model have evolved, such as the New Subscriber
Model, Maturation Model, and Multiple Subscriber Model (Mings and White, 2000).
Under this model, users pay a fee either on a monthly or a yearly basis for a product or
service. One popular example of this is Netflix. Several online enterprises use this
model such as internet providers, software providers and other online websites. The
biggest challenge in this model is to keep subscribers aware of their subscription. If users
subscribe to your service and then forget about it, the model has failed. The site needs
to stay continuously alive and connected to subscribers in order to ensure that they come
back.

4. Virtual Goods Model: This model deals with the business of non-physical, intangible
products or services, such as points, gifts, and game wins on a website. Some
examples are Facebook gifts and the online sticker markets. Game products in
particular are favourites of online consumers. While margins may be high in this kind
of business model, as the cost is almost nil, the critical element for success is to
create relevant and customised virtual goods for the targeted audience. The virtual
goods sales model also attempts to utilise the positive network externalities from
non-paying users, and it price discriminates between differently price-sensitive users
by slicing the total value proposition bundle of the virtual world into individual virtual
goods. Following this line of thought, the difference between free-to-play (a form of
the Freemium model) and premium models is the utilisation of positive network effects
from non-paying users. Price discrimination refers to capturing value from differently
price-sensitive customers for identical products (Shapiro and Varian, 1999). Of course,
this does not mean that price discrimination is implemented in the pricing of a single
virtual good; rather, the total value proposition bundle of the virtual world is price
discriminated.

5. Advertising Model: Industry has been implementing traditional advertising models
for a long time, such as sponsorships, banner or display advertising, design and
development, targeted ads, classifieds, personal advertisements, and auctions (Mings
and White, 2000). The traditional model’s usefulness in the case of online newspapers
is discussed in Erlindson’s (1995) study. Now it is time for digital intervention:
digitization allows interactivity in this model and also creates opportunities for
consumer research and purchases (Mings and White, 2000).
The advertising model is the most commonly observed on online platforms today, across
various social media websites. Sites such as Facebook have now begun to see the
advantages and potential in this model and to use the information users share on their
profiles to cater to clients who wish to advertise to relevant target audiences through
their Facebook pages. Under this model, websites rely
on advertisements to generate revenue against the traffic on the site. The higher the

77
traffic on the site, the higher the charges for online ad space on the site. Some sites
also provide data on user demographics and interests to increase the value preposition
for clients. The key challenge under this model is to build a strong user base, who are
loyal enough to pay for the service you provide and to invest their money on the clients
who showcase their products on your site.
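The traffic-to-revenue relationship under this model can be sketched with a simple CPM (cost per thousand impressions) calculation; the impression counts and CPM rates below are purely illustrative:

```r
# Illustrative CPM-based ad revenue: revenue grows with traffic, and
# sites offering well-targeted inventory (backed by demographic data)
# can command a higher CPM for the same traffic.
ad_revenue <- function(monthly_impressions, cpm) {
  (monthly_impressions / 1000) * cpm
}

# A generic site vs. one selling targeted ad space, same traffic.
ad_revenue(2e6, cpm = 1.5)  # 2,000,000 impressions at 1.50 CPM -> 3000
ad_revenue(2e6, cpm = 6.0)  # targeted inventory at 6.00 CPM   -> 12000
```

This is why both growing traffic and enriching user data raise revenue under the advertising model.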

Text Mining and Word Cloud


Text-mining methods allow us to highlight the most frequently used keywords in a body of
text. One can create a word cloud, also referred to as a text cloud or tag cloud, which is a visual
representation of text data.
The procedure for creating word clouds is very simple in R once you know the steps to
execute. The text mining package (tm) and the word-cloud generator package (wordcloud) are
available in R to help us analyse text and quickly visualize the keywords as a word
cloud.

3 reasons you should use word clouds to present your text data

1. Word clouds add simplicity and clarity: the most-used keywords stand out better in
a word cloud
2. Word clouds are a potent communication tool: they are easy to understand, easy to
share and impactful
3. Word clouds are more visually engaging than a table of data

Who is using word clouds?

 Researchers: for reporting qualitative data
 Marketers: for highlighting the needs and pain points of customers
 Educators: to support essential issues
 Politicians and journalists
 Social media sites: to collect, analyse and share user sentiments

The 5 main steps to create word clouds in R

Step 1: Install and load the required packages

Type the R code below, to install and load the required packages:
=====================================Code====================================
# Install
install.packages("tm") # for text mining
install.packages("SnowballC") # for text stemming
install.packages("wordcloud") # word-cloud generator
install.packages("RColorBrewer") # color palettes

# Load
library("tm")
library("SnowballC")
library("wordcloud")
library("RColorBrewer")

Step 2: Text mining

1. Load the text

The text is loaded using the Corpus() function from the text mining (tm) package. A corpus is a
list of documents (in our case, we have only one document).

To import a file saved locally on your computer, you can run text <- readLines(file.choose())
and pick the text file interactively.
In the example below, I'll load a .txt file hosted on the STHDA website:
================================Code====================================
# Read the text file from the internet
filePath <- "http://www.sthda.com/sthda/RDoc/example-files/martin-luther-king-i-have-a-dream-speech.txt"
text <- readLines(filePath)

2. Load the data as a corpus and inspect the content of the document
==========================Code==========================================
docs <- Corpus(VectorSource(text))
inspect(docs)

Text transformation

Transformations are performed using the tm_map() function, for example to replace special
characters in the text.
Replacing "/", "@" and "|" with a space:
==================================Code=======================================
toSpace <- content_transformer(function(x, pattern) gsub(pattern, " ", x))
docs <- tm_map(docs, toSpace, "/")
docs <- tm_map(docs, toSpace, "@")
docs <- tm_map(docs, toSpace, "\\|")

Cleaning the text

The tm_map() function is used to remove unnecessary white space, to convert the text to lower
case, and to remove common stopwords such as "the" and "we".
The information value of stopwords is near zero, because they are so common in a
language; removing such words is useful before further analysis. For stopwords, the
supported languages are danish, dutch, english, finnish, french, german, hungarian, italian,
norwegian, portuguese, russian, spanish and swedish. Language names are case sensitive.
You can also remove numbers and punctuation
with the removeNumbers and removePunctuation transformations.
Another important preprocessing step is text stemming, which reduces words to their
root form. In other words, this process removes suffixes from words to simplify them and
recover their common origin. For example, a stemming process reduces the words "moving", "moved"
and "movement" to the root word "move".
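Before turning to the tm pipeline below, the core of these cleaning steps (lower-casing, stripping punctuation and digits, removing stopwords, counting) can be sketched in base R alone; the stopword list here is a tiny hand-picked subset for illustration, not the full list shipped with tm:

```r
text <- "Moving forward, we moved the movement. The movement moved us."

# Lower-case, then strip punctuation and digits, leaving spaces
clean <- tolower(text)
clean <- gsub("[[:punct:][:digit:]]", " ", clean)

# Split on runs of whitespace to get the word tokens
words <- strsplit(trimws(clean), "[[:space:]]+")[[1]]

# Remove a few common stopwords (tiny illustrative list)
stopwords_min <- c("the", "we", "us", "a", "an", "of")
words <- words[!words %in% stopwords_min]

# Word frequencies, most frequent first
freq <- sort(table(words), decreasing = TRUE)
freq  # "moved" and "movement" each occur twice
```

The tm functions used in the next block perform these same operations, but on every document of a corpus at once.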

The R code below can be used to clean your text:


=======================Code=============================================
# Convert the text to lower case
docs <- tm_map(docs, content_transformer(tolower))
# Remove numbers
docs <- tm_map(docs, removeNumbers)
# Remove english common stopwords
docs <- tm_map(docs, removeWords, stopwords("english"))
# Remove your own stop word
# specify your stopwords as a character vector
docs <- tm_map(docs, removeWords, c("blabla1", "blabla2"))
# Remove punctuations
docs <- tm_map(docs, removePunctuation)
# Eliminate extra white spaces
docs <- tm_map(docs, stripWhitespace)
# Text stemming (optional)
# docs <- tm_map(docs, stemDocument)

Step 3: Build a term-document matrix

A term-document matrix is a table containing the frequency of the words: row names are words
and column names are documents. The function TermDocumentMatrix() from the text mining package can
be used as follows:
============================ Code=======================================
dtm <- TermDocumentMatrix(docs)
m <- as.matrix(dtm)
v <- sort(rowSums(m), decreasing = TRUE)
d <- data.frame(word = names(v), freq = v)
head(d, 10)

Step 4: Generate the word cloud

The importance of words can be illustrated as a word cloud as follows:

============================Code======================================
set.seed(1234)
wordcloud(words = d$word, freq = d$freq, min.freq = 1,
          max.words = 200, random.order = FALSE, rot.per = 0.35,
          colors = brewer.pal(8, "Dark2"))

The word cloud clearly shows that "will", "freedom", "dream", "day" and "together" are the
five most frequent words in the "I Have a Dream" speech by Martin Luther King.
Arguments of the wordcloud() function:
 words : the words to be plotted
 freq : their frequencies
 min.freq : words with a frequency below min.freq will not be plotted
 max.words : maximum number of words to be plotted
 random.order : plot words in random order; if FALSE, words are plotted in decreasing
frequency
 rot.per : proportion of words with 90-degree rotation (vertical text)
 colors : colour words from least to most frequent; use, for example, colors = "black" for a
single colour
You can have a look at the frequent terms in the term-document matrix as follows. In the example
below we look for words that occur at least four times:
==========================Code==========================================
findFreqTerms(dtm, lowfreq = 4)

You can analyse the association between frequent terms (i.e., terms which correlate) using the
findAssocs() function. The R code below identifies which words are associated with "freedom" in
the speech:
==========================Code==========================================
findAssocs(dtm, terms = "freedom", corlimit = 0.3)

Step 5: Plot word frequencies

The frequencies of the 10 most frequent words are plotted:


==========================Code==========================================
barplot(d[1:10,]$freq, las = 2, names.arg = d[1:10,]$word,
col ="lightblue", main ="Most frequent words",
ylab = "Word frequencies")
========================================================================
