RESEARCH DEVELOPMENT Lesson 7
BY
MR SHADRECK PEARSON
So, to help you get the process started, we shine a spotlight on data collection.
Data Collection is the process of gathering, measuring, and analyzing accurate data from
a variety of relevant sources to find answers to research problems, answer questions,
evaluate outcomes, and forecast trends and probabilities.
Our society is highly dependent on data, which underscores the importance of collecting
it. Accurate data collection is necessary to make informed business decisions, ensure
quality assurance, and maintain research integrity.
During data collection, the researchers must identify the data types, the sources of data,
and what methods are being used. We will soon see that there are many different data
collection methods. There is heavy reliance on data collection in research, commercial,
and government fields.
Before an analyst begins collecting data, they must first answer questions such as:
What methods and procedures will be used to collect, store, and process the information?
Qualitative data covers descriptions such as color, size, quality, and appearance.
Quantitative data, unsurprisingly, deals with numbers, such as statistics, poll numbers,
percentages, etc.
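To make the distinction concrete, here is a minimal sketch in Python (the record and its field names are invented for the example):

```python
# A single record often mixes both kinds of data.
# Qualitative fields hold descriptions; quantitative fields hold numbers.
record = {
    "color": "blue",          # qualitative: a description
    "quality": "excellent",   # qualitative: a judgment in words
    "units_sold": 1250,       # quantitative: a count
    "approval_pct": 64.5,     # quantitative: a percentage
}

qualitative = {k: v for k, v in record.items() if isinstance(v, str)}
quantitative = {k: v for k, v in record.items() if isinstance(v, (int, float))}

print(qualitative)   # fields described in words
print(quantitative)  # fields measured in numbers
```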
Before a judge makes a ruling in a court case or a general creates a plan of attack, they
must have as many relevant facts as possible. The best courses of action come from
informed decisions, and information and data are synonymous.
The concept of data collection isn’t a new one, as we’ll see later, but the world has
changed. There is far more data available today, and it exists in forms that were unheard
of a century ago. The data collection process has had to change and grow with the times,
keeping pace with technology.
Whether you’re in the world of academia, trying to conduct research, or part of the
commercial sector, thinking of how to promote a new product, you need data collection
to help you make better choices.
Now that you know what data collection is and why we need it, let's take a look at the
different methods of data collection. While the phrase “data collection” may sound all
high-tech and digital, it doesn’t necessarily entail things like computers, big data, and the
internet. Data collection could mean a telephone survey, a mail-in comment card, or even
some guy with a clipboard asking passersby some questions. But let’s see if we can sort
the different data collection methods into a semblance of organized categories.
The following are among the primary methods of collecting data in business analytics:
Surveys
Transactional Tracking
Observation
Online Tracking
Forms
Data collection breaks down into two methods. As a side note, many terms, such as
techniques, methods, and types, are used interchangeably, depending on who uses them.
One source may call data collection techniques “methods,” for instance. But whatever
labels we use, the general concepts and breakdowns apply across the board whether we’re
talking about marketing analysis or a scientific research project.
The two methods are:
1. Primary
As the name implies, this is original, first-hand data collected by the researchers
themselves. This process is the initial information-gathering step, performed before
anyone carries out any further or related research. Primary data is highly accurate
because the researcher collects it first-hand. However, there's a downside: first-hand
research is potentially time-consuming and expensive.
2. Secondary
Secondary data is second-hand data collected by other parties that has already
undergone statistical analysis. This data is either information that the researcher has
tasked other people to collect or information the researcher has looked up. Simply put,
it's second-hand information. Although it's easier and cheaper to obtain than primary
information, secondary information raises concerns regarding accuracy and authenticity.
Interviews
The researcher asks questions of a large sampling of people, either in direct interviews
or through means of mass communication such as phone or mail. This method is by far the
most common means of data gathering.
Projective data gathering is an indirect interview, used when potential respondents know
why they're being asked questions and hesitate to answer. For instance, someone may be
reluctant to answer questions about their phone service if a cell phone carrier
representative poses the questions. With projective data gathering, the interviewees get an
incomplete question, and they must fill in the rest, using their opinions, feelings, and
attitudes.
Delphi Technique
The Oracle at Delphi, according to Greek mythology, was the high priestess of Apollo’s
temple, who gave advice, prophecies, and counsel. In the realm of data collection,
researchers use the Delphi technique by gathering information from a panel of experts.
Each expert answers questions in their field of specialty, and the replies are consolidated
into a single opinion.
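As a small illustration of the consolidation step, assuming each expert replies with a numeric forecast (the panel and figures below are invented):

```python
from statistics import median

# Hypothetical replies from a Delphi panel: each expert's forecast
# for next year's market growth, in percent.
expert_forecasts = {
    "economist": 3.2,
    "industry_analyst": 4.0,
    "academic": 2.8,
    "consultant": 3.5,
    "trade_body": 3.2,
}

# Consolidate the panel's replies into a single opinion,
# here by taking the median forecast.
consensus = median(expert_forecasts.values())
print(f"Consolidated forecast: {consensus}%")
```

In practice the Delphi technique iterates over several rounds of questioning; the median is just one simple way to reduce the replies to one figure.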
Focus Groups
Focus groups, like interviews, are a commonly used technique. The group consists of
anywhere from a half-dozen to a dozen people, led by a moderator, brought together to
discuss the issue.
Questionnaires
A questionnaire is a written set of questions that respondents answer on their own,
whether delivered on paper or electronically.
Secondary Data Collection
Unlike primary data collection, secondary data collection has no specific collection
methods. Instead, since the information has already been collected, the researcher
consults various data sources, such as:
Financial Statements
Sales Reports
Retailer/Distributor/Dealer Feedback
Business Journals
Trade/Business Magazines
The internet
Now that we’ve explained the various techniques, let’s narrow our focus even further by
looking at some specific tools. For example, we mentioned interviews as a technique, but
we can further break that down into different interview types (or “tools”).
Word Association
The researcher gives the respondent a set of words and asks them what comes to mind
when they hear each word.
Sentence Completion
Researchers use sentence completion to understand what kind of ideas the respondent has.
This tool involves giving an incomplete sentence and seeing how the interviewee finishes
it.
Role-Playing
Respondents are presented with an imaginary situation and asked how they would act or
react if it were real.
In-Person Surveys
The researcher asks respondents questions face-to-face.
Online/Web Surveys
These surveys are easy to accomplish, but some users may be unwilling to answer
truthfully, if at all.
Mobile Surveys
These surveys are taken on mobile devices, typically through SMS or mobile apps.
Phone Surveys
No researcher can call thousands of people at once, so they need a third party to handle
the chore. However, many people have call screening and won’t answer.
Observation
Sometimes, the simplest method is the best. Researchers who make direct observations
collect data quickly and easily, with little intrusion or third-party bias. Naturally, it’s only
effective in small-scale situations.
Accurate data collection is crucial to preserving the integrity of research, regardless
of the subject of study or the preferred way of defining data (quantitative or
qualitative). Errors are less likely to occur when the right data-gathering tools are
used, whether they are brand-new, updated versions of existing tools, or already
available.
Incorrectly collected data can have serious effects. Although the degree of influence
from flawed data collection may vary by discipline and by the type of investigation,
study findings built on flawed data have the potential to cause disproportionate harm
when they are used to support recommendations for public policy.
The main justification for maintaining data integrity is that it aids the detection of
errors in the data-gathering process, whether those errors were made purposefully
(deliberate falsification) or not (systematic or random errors).
Quality assurance and quality control are two strategies that help protect data integrity
and guarantee the scientific validity of study results.
Quality assurance - tasks that are performed before data collection
Quality control - tasks that are performed during and after data collection
1. Quality Assurance
Because quality assurance takes place before data collection, its primary goal is
"prevention" (i.e., forestalling problems with data collection). Prevention is the best
way to protect the accuracy of data collection. This proactive step is best exemplified
by the uniformity of protocol created in a thorough and exhaustive procedures manual for
data collection.
When guides are written poorly, the likelihood of failing to spot issues and mistakes
early in the research effort increases. These shortcomings can show up in several ways:
Failure to determine the precise subjects and methods for training or retraining staff
members in data collection
There isn't a system in place to track modifications to processes that may occur as the
investigation continues.
Uncertainty regarding the date, procedure, and identity of the person or people in charge
of examining the data
Incomprehensible guidelines for using, adjusting, and calibrating the data collection
equipment.
2. Quality Control
Although quality control actions (detection/monitoring and intervention) take place
both during and after data collection, the specifics should be meticulously detailed in
the procedures manual. A clearly defined communication structure is a prerequisite for
establishing monitoring systems. Once data collection problems are discovered, there
should be no ambiguity about the information flow between the principal investigators
and staff personnel. A poorly designed communication system promotes lax oversight and
reduces opportunities for error detection.
Detection or monitoring can take the form of direct staff observation, conference calls,
site visits, or frequent and routine assessments of data reports to spot discrepancies,
excessive numbers, or invalid codes. Site visits might not be appropriate for all
disciplines. Still, without routine auditing of records, whether qualitative or
quantitative, it will be challenging for investigators to confirm that data gathering is
taking place in accordance with the manual's defined methods.
For instance, problems with data collection that call for immediate action include:
Fraud or misbehavior
Systematic errors and procedure violations
In the social and behavioral sciences, where primary data collection involves human
subjects, researchers are trained to include one or more secondary measures that can be
used to verify the quality of the information being obtained from the human subject.
For instance, a researcher conducting a survey might be interested in the prevalence of
risky behaviors among young adults as well as the social factors that influence the
propensity for and frequency of these risky behaviors.
There are some prevalent challenges faced while collecting data; let us explore a few of
them to understand and avoid them.
1. Poor Data Quality
Poor data quality is the main threat to the broad and successful application of machine
learning. If you want to make technologies like machine learning work for you, data
quality must be your top priority. Let's discuss some of the most prevalent data quality
problems and how to fix them.
2. Inconsistent Data
When working with various data sources, it's conceivable that the same information will
have discrepancies between sources. The differences could be in formats, units, or
occasionally spellings. The introduction of inconsistent data might also occur during firm
mergers or relocations. Inconsistencies in data have a tendency to accumulate and reduce
the value of data if they are not continually resolved. Organizations that have heavily
focused on data consistency do so because they only want reliable data to support their
analytics.
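A minimal sketch of resolving such inconsistencies, assuming two invented source records that differ in format, units, and spelling:

```python
# The same customer appears in two source systems with inconsistent
# formats, units, and spellings (all values are invented).
source_a = {"name": "ACME Ltd", "revenue_usd": 1_200_000, "country": "U.K."}
source_b = {"name": "Acme Ltd.", "revenue_usd_k": 1200, "country": "United Kingdom"}

COUNTRY_ALIASES = {"u.k.": "United Kingdom", "uk": "United Kingdom"}

def normalize(record):
    """Map a raw record onto one canonical schema."""
    out = {"name": record["name"].rstrip(".").upper()}
    # Unify units: some sources report revenue in thousands of dollars.
    if "revenue_usd_k" in record:
        out["revenue_usd"] = record["revenue_usd_k"] * 1000
    else:
        out["revenue_usd"] = record["revenue_usd"]
    # Unify spellings via an alias table.
    country = record["country"]
    out["country"] = COUNTRY_ALIASES.get(country.lower(), country)
    return out

print(normalize(source_a) == normalize(source_b))  # True
```

Continually applying this kind of normalization at ingestion time is one way to keep inconsistencies from accumulating.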
3. Data Downtime
Data is the driving force behind the decisions and operations of data-driven businesses.
However, there may be brief periods when their data is unreliable or not ready. This
data unavailability can have a significant impact on businesses, from customer
complaints to subpar analytical outcomes. A data engineer spends about 80% of their time
updating, maintaining, and guaranteeing the integrity of the data pipeline. The lengthy
operational lead time from data capture to insight imposes a high marginal cost on
asking the next business question.
Schema modifications and migration problems are just two examples of the causes of
data downtime. Data pipelines can be difficult due to their size and complexity. Data
downtime must be continuously monitored, and it must be reduced through automation.
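As a rough illustration of automated monitoring, here is a sketch of a freshness check that flags tables whose last successful update is too old (the table names and the freshness limit are invented):

```python
from datetime import datetime, timedelta

# Maximum age a table may reach before we treat it as downtime.
FRESHNESS_LIMIT = timedelta(hours=6)

# Timestamps of the last successful pipeline run per table (invented).
last_updated = {
    "orders": datetime(2024, 1, 10, 9, 0),
    "customers": datetime(2024, 1, 9, 20, 0),
}

def stale_tables(now):
    """Return the tables that have not been refreshed recently enough."""
    return [t for t, ts in last_updated.items() if now - ts > FRESHNESS_LIMIT]

print(stale_tables(datetime(2024, 1, 10, 12, 0)))  # ['customers']
```

A real monitoring setup would run a check like this on a schedule and alert the data team instead of printing.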
4. Ambiguous Data
Even with thorough oversight, some errors can still occur in massive databases or data
lakes. For data streaming at high speed, the issue becomes even more overwhelming.
Spelling mistakes can go unnoticed, formatting difficulties can occur, and column
headings might be misleading. Such ambiguous data can cause a number of problems for
reporting and analytics.
5. Duplicate Data
Streaming data, local databases, and cloud data lakes are just a few of the sources of data
that modern enterprises must contend with. They might also have application and system
silos. These sources are likely to duplicate and overlap each other quite a bit. For instance,
duplicate contact information has a substantial impact on customer experience. If certain
prospects are ignored while others are engaged repeatedly, marketing campaigns suffer.
The likelihood of biased analytical outcomes increases when duplicate data are present. It
can also result in ML models with biased training data.
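A minimal sketch of removing duplicate contact records, keeping the first occurrence of each email address (the records are invented):

```python
# Contact records gathered from overlapping sources (invented data).
contacts = [
    {"email": "jo@example.com", "source": "crm"},
    {"email": "amy@example.com", "source": "webform"},
    {"email": "jo@example.com", "source": "newsletter"},  # duplicate
]

seen = set()
deduped = []
for contact in contacts:
    key = contact["email"].lower()  # treat addresses case-insensitively
    if key not in seen:
        seen.add(key)
        deduped.append(contact)

print(len(deduped))  # 2
```

Real deduplication usually also has to match records that differ slightly (nicknames, typos), which needs fuzzy matching rather than an exact key.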
6. Too Much Data
While we emphasize data-driven analytics and its advantages, too much data is itself a
data quality problem. There is a risk of getting lost in an abundance of data when
searching for information pertinent to your analytical efforts. Data scientists, data
analysts, and business users devote 80% of their work to finding and organizing the
appropriate data. Other data quality problems become more serious as data volume
increases, particularly with streaming data and large files or databases.
7. Inaccurate Data
For highly regulated industries like healthcare, data accuracy is crucial. Given recent
experience, it is more important than ever to improve data quality for COVID-19 and
future pandemics. Inaccurate information does not give you a true picture of the
situation and cannot be used to plan the best course of action. Personalized customer
experiences and marketing strategies underperform if your customer data is inaccurate.
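One common safeguard is to validate records against plausibility rules before analysis. A minimal sketch, with invented field names and ranges:

```python
def validate(record):
    """Return a list of problems found in one customer record."""
    problems = []
    # An age outside this band almost certainly indicates an entry error.
    if not (0 < record.get("age", -1) < 120):
        problems.append("age out of plausible range")
    # A crude structural check on the email field.
    if "@" not in record.get("email", ""):
        problems.append("email looks malformed")
    return problems

good = {"age": 34, "email": "sam@example.com"}
bad = {"age": 480, "email": "not-an-email"}

print(validate(good))  # []
print(validate(bad))   # two problems flagged
```

Records that fail validation can be quarantined for review rather than silently feeding into reports.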
Data inaccuracies can be attributed to a number of things, including data degradation,
human error, and data drift. Worldwide, data decay occurs at a rate of about 3% per
month, which is quite concerning. Data integrity can be compromised as data is
transferred between different systems, and data quality can deteriorate over time.
8. Hidden Data
The majority of businesses use only a portion of their data, with the remainder
sometimes lost in data silos or discarded in data graveyards. For instance, the customer
service team might not receive client data from sales, missing an opportunity to build
more precise and comprehensive customer profiles. Hidden data causes businesses to miss
out on opportunities to develop novel products, enhance services, and streamline
procedures.
Finding relevant data is not so easy. There are several factors that we need to consider
while trying to find relevant data, including:
Relevant domain
Relevant demographics
Relevant time period
and many more. Data that is not relevant to our study on any of these factors is
rendered obsolete, and we cannot effectively proceed with its analysis. This could lead
to incomplete research or analysis, re-collecting data again and again, or shutting down
the study.
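These relevance factors can be applied as simple filters. A minimal sketch, with invented criteria and records:

```python
from datetime import date

# Candidate records for a study (all values invented).
records = [
    {"domain": "health", "age": 25, "collected": date(2023, 5, 1)},
    {"domain": "finance", "age": 25, "collected": date(2023, 5, 2)},
    {"domain": "health", "age": 70, "collected": date(2023, 5, 3)},
    {"domain": "health", "age": 30, "collected": date(2019, 1, 1)},
]

def is_relevant(r):
    return (
        r["domain"] == "health"                 # relevant domain
        and 20 <= r["age"] <= 50                # relevant demographic
        and r["collected"] >= date(2023, 1, 1)  # relevant time period
    )

relevant = [r for r in records if is_relevant(r)]
print(len(relevant))  # 1
```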
Determining what data to collect is one of the most important factors while collecting
data, and it should be one of the first decisions made. We must choose the subjects the
data will cover, the sources we will use to gather it, and the quantity of information
we will require. Our answers to these questions will depend on our aims, or what we
expect to achieve using our data. As an illustration, we may choose to gather
information on the categories of articles that website visitors between the ages of 20
and 50 most frequently access. We can also decide to compile data on the typical age of
all the clients who made a purchase from our business over the previous month.
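The first example can be sketched as a small script (the visit log is invented):

```python
from collections import Counter

# Which article categories do visitors aged 20 to 50 access most often?
visits = [
    {"age": 24, "category": "technology"},
    {"age": 37, "category": "sports"},
    {"age": 45, "category": "technology"},
    {"age": 63, "category": "finance"},   # outside the target age band
    {"age": 31, "category": "technology"},
]

# Keep only visits from the target demographic, then tally categories.
in_scope = [v["category"] for v in visits if 20 <= v["age"] <= 50]
top_category, count = Counter(in_scope).most_common(1)[0]
print(top_category, count)  # technology 3
```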
Not addressing this could lead to duplicated work, collection of irrelevant data, or the
ruin of your study as a whole.
Big data refers to exceedingly massive data sets with more intricate and diversified
structures. These traits typically result in increased challenges when storing and
analyzing the data and when applying additional methods of extracting results. Big data
refers especially to data sets so enormous or intricate that conventional data
processing tools are insufficient: the overwhelming amount of data, both unstructured
and structured, that a business faces on a daily basis.
The amount of data produced by healthcare applications, the internet, social networking
sites, sensor networks, and many other businesses is rapidly growing as a result of
recent technological advancements. Big data refers to the vast volume of data created
from numerous sources in a variety of formats at extremely fast rates. Dealing with this
kind of data is one of the many challenges of data collection and is a crucial step
toward collecting effective data.
Poor design and low response rates have been shown to be two issues with data
collection, particularly in health surveys that use questionnaires. This might lead to
an insufficient or inadequate supply of data for the study. Creating an incentivized
data collection program might be beneficial in this case to get more responses.
In the Data Collection Process, there are 5 key steps. They are explained briefly below:
1. Decide What Information You Want to Collect
The first thing that we need to do is decide what information we want to gather. We must
choose the subjects the data will cover, the sources we will use to gather it, and the
quantity of information that we would require. For instance, we may choose to gather
information on the categories of products that an average e-commerce website visitor
between the ages of 30 and 45 most frequently searches for.
2. Establish a Deadline for Data Collection
The process of creating a strategy for data collection can now begin. We should set a
deadline for our data collection at the outset of our planning phase. Some forms of data
we might want to continuously collect. We might want to build up a technique for
tracking transactional data and website visitor statistics over the long term, for instance.
However, we will track the data throughout a certain time frame if we are tracking it for a
particular campaign. In these situations, we will have a schedule for when we will begin
and finish gathering data.
3. Select a Data Collection Method
We will select the data collection technique that will serve as the foundation of our data
gathering plan at this stage. We must take into account the type of information that we
wish to gather, the time period during which we will receive it, and the other factors we
decide on to choose the best gathering strategy.
4. Gather Information
Once our plan is complete, we can put it into action and begin gathering data. We can
store and arrange our data in a DMP (data management platform). We need to be careful to
follow our plan and keep an eye on how it's doing. Especially if we are collecting data
regularly, setting up a timetable for when we will be checking in on how our data
gathering is going may be helpful. As circumstances alter and we learn new details, we
might need to amend our plan.
It's time to examine our data and arrange our findings after we have gathered all of our
information. The analysis stage is essential because it transforms unprocessed data into
insightful knowledge that can be applied to better our marketing plans, goods, and
business judgments. The analytics tools included in our DMP can be used to assist with
this phase. We can put the discoveries to use to enhance our business once we have
discovered the patterns and insights in our data.
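As a small illustration of this analysis stage, here is a sketch that turns invented purchase records into a simple insight:

```python
from statistics import mean

# Raw collected data: purchase value per transaction (figures invented).
purchases = [
    {"month": "June", "value": 40.0},
    {"month": "June", "value": 60.0},
    {"month": "July", "value": 90.0},
]

# Group purchase values by month.
by_month = {}
for p in purchases:
    by_month.setdefault(p["month"], []).append(p["value"])

# The insight: average purchase value per month.
averages = {month: mean(values) for month, values in by_month.items()}
print(averages)  # {'June': 50.0, 'July': 90.0}
```

A DMP's built-in analytics tools would typically compute this sort of aggregate for us, but the underlying idea is the same: raw records in, summarized insight out.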
_________________________
As is well known, gathering primary data is costly and time intensive. The main
techniques for gathering data are observation, interviews, questionnaires, schedules, and
surveys.
The term "data collection tools" refers to the tools or devices used to gather data,
such as a paper questionnaire or a computer-assisted interviewing system. Tools used to
gather data include case studies, checklists, interviews, observation (occasionally),
surveys, and questionnaires.
While qualitative research focuses on words and meanings, quantitative research deals
with figures and statistics. You can systematically measure variables and test hypotheses
using quantitative methods. You can delve deeper into ideas and experiences using
qualitative methodologies.
While there are numerous other ways to get quantitative information, the methods
indicated above (probability sampling, interviews, questionnaires, observation, and
document review) are the most typical and frequently employed, whether collecting
information offline or online.
User research that includes both qualitative and quantitative techniques is known as
mixed methods research. For deeper user insights, mixed methods research combines
insightful user data with useful statistics.