Amrcb Unit 5
Discriminant Analysis
Discriminant analysis is a statistical technique that determines group membership from a collection of metric predictors, the independent variables. Its primary function is to assign each observation to a particular group or category according to the values of those predictors.
As a classification technique, discriminant analysis works with responses to questions posed in the form of variables and other factors that serve as predictors. It is also used to find the contribution of every parameter in dividing the groups. It does its work by identifying one or more linear combinations of the variables that have been chosen.
The model is made up of a discriminant function or, for more than two groups, a set of discriminant functions, premised on linear combinations of the predictor variables that offer the best discrimination between the groups. A linear or quadratic function may be used to assign each individual (sample) to one of the existing groups and to discover the impact of each parameter in dividing the groups. After the functions have been constructed using a sample of cases for which group membership is known, they may be applied to fresh cases that contain measurements for the predictor variables but whose group membership is unknown.
Assumptions
The variables used as predictors should have a multivariate normal distribution, and the within-group variance-covariance matrices should be equal across groups.
Group membership is assumed to be mutually exclusive, that is, no case belongs to more than one group. The procedure is most effective when group membership is a truly categorical variable; if group membership is based on values of a continuous variable (for example, high score versus low score), consider using linear regression to take advantage of the richer information offered by the continuous variable itself.
Types
Linear and quadratic discriminant analysis are the two main varieties of the technique.
Linear discriminant analysis, often known as LDA, is a supervised approach that attempts to predict the class of the dependent variable. It is predicated on the hypothesis that the independent variables are normally distributed (continuous and numerical) and that each class has the same variance and covariance. Both classification and dimensionality reduction may be accomplished with the assistance of this method.
Quadratic discriminant analysis (QDA) likewise uses the independent variables to predict the class of the dependent variable. The assumption of normal distribution is maintained, but it does not presume that the classes have an equal variance and covariance; each class is allowed its own covariance structure.
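As a rough illustration of the difference, the sketch below fits both LDA and QDA with scikit-learn on the bundled iris data; the data set and the train/test split are only stand-ins for any collection of metric predictors with known group membership.

```python
# A minimal sketch of linear and quadratic discriminant analysis with
# scikit-learn; iris stands in for any set of metric predictors with
# known group membership.
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# LDA assumes a shared covariance matrix across groups.
lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
# QDA estimates a separate covariance matrix for each group.
qda = QuadraticDiscriminantAnalysis().fit(X_train, y_train)

print("LDA accuracy:", lda.score(X_test, y_test))
print("QDA accuracy:", qda.score(X_test, y_test))

# Once fitted on cases with known membership, the functions can be
# applied to fresh cases whose group is unknown.
print("Predicted groups for new cases:", lda.predict(X_test[:5]))
```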
Application
Not only is it possible to solve classification issues using discriminant analysis; it also makes it possible to see which predictor variables contribute most to separating the groups.
Businesses use discriminant analysis as a tool to assist in gleaning meaning from data sets. This
enables enterprises to drive innovative and competitive remedies supporting the consumer
experience, customization, advertising, making predictions, and many other common strategic
purposes.
In human resources, it can be used to evaluate potential candidates by using their background information to predict how well they would perform once employed.
Based on many performance metrics, an industrial facility can forecast when individual machine parts will fail or need maintenance.
The ability to anticipate market trends that will have an impact on new products or services is likewise valuable for marketing and product planning.
Factor Analysis
Factor analysis is used in big data because the information in a large number of variables can be condensed into a smaller number of variables. For this same reason it is also frequently referred to as "dimension reduction": many dimensions of data can be collapsed into one or more super-variables, also known as factors.
The hidden structure of a group of variables can be uncovered with the use of factor analysis. It brings the number of variables in the attribute space down to a more manageable level, and it is a non-dependent procedure, that is, it does not assume that a dependent variable is specified. Principal component analysis is the most commonly used technique; it condenses the data from a large number of variables into a more manageable number of factors.
Since factor analysis helps reduce the number of variables to work with, some people call it a data-reduction technique.
It can be used across various fields like data mining, machine learning, marketing, etc. It has useful applicability anywhere data needs to be reduced for further operations.
Two types of factor analysis, namely principal component analysis and common factor analysis, are the most widely used.
Factor analysis can help by simplifying the situation and minimizing the number of variables. The sheer quantity of variables becomes manageable when conducting lengthy
studies that include significant portions of Matrix Likert scale questions. The analysts can
better focus on and understand the results by simplifying the data using factor analysis.
For example, companies that want to understand what drives customer satisfaction use surveys to ask a number of questions regarding the product in question. These questions will cover various topics related to the product, such as its features, how easily it can be purchased, how it can be used, its price, how appealing it looks, and so on. On a regular basis, they are quantified using numerical scales. A researcher, however, is looking for the "factors", the underlying characteristics, that contribute to overall consumer happiness. Most of these are mental or emotional reactions to the product, and they cannot be assessed in a straightforward manner. In factor analysis, the variables from such surveys are combined into a smaller set of these underlying factors.
Types
When doing factor analysis on a data set, a variety of methods, including the following, can be used:
Principal component analysis: This is the methodology used by researchers most of the time. It takes the maximum variance and places it in the first factor. After that, it removes the variance explained by the first factor and isolates the second factor, and the process continues until the last factor is extracted.
Common factor analysis: In terms of popularity among researchers, this method comes in at number two. It extracts only the common variance shared by the variables and puts it into factors, leaving each variable's unique variance out of account. This method is used in structural equation modeling (SEM).
Image factoring: This method is based on the correlation matrix and uses the OLS regression approach to generate an accurate prediction of the factor. Image analysis is used to determine the variability of a group of variables.
Maximum likelihood method: This also operates on the correlation matrix, but it extracts factors using the maximum likelihood method. Estimation is accomplished by optimizing a likelihood function in such a way that, according to the statistical model that is being assumed, the observed data has the highest probability.
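As one possible illustration of the idea, the sketch below runs a maximum-likelihood factor analysis with scikit-learn's FactorAnalysis on simulated survey ratings; the two hidden factors and six items are invented purely to show how many observed variables collapse into a few factors.

```python
# A minimal sketch of factor analysis using scikit-learn's FactorAnalysis
# (a maximum-likelihood estimator). The "survey" data are simulated:
# 200 respondents answering 6 rating items.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 200
# Two hidden factors (say, perceived quality and perceived value) drive
# the six observed ratings, plus item-specific noise.
quality = rng.normal(size=n)
value = rng.normal(size=n)
items = np.column_stack([
    quality + 0.2 * rng.normal(size=n),
    quality + 0.3 * rng.normal(size=n),
    quality + 0.3 * rng.normal(size=n),
    value + 0.2 * rng.normal(size=n),
    value + 0.3 * rng.normal(size=n),
    value + 0.3 * rng.normal(size=n),
])

fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
# Loadings: how strongly each observed item depends on each factor.
print(np.round(fa.components_, 2))
# Respondents' scores on the two condensed factors replace the six items.
scores = fa.transform(items)
print(scores.shape)  # (200, 2)
```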
Applications
Factor analysis has its applications in many fields. Following are a few examples of the
applications.
#1 - Marketing
Marketing promotes products, services, and brands, and factor analysis can support this work. Businesses use the technique to establish the links between the different aspects of a marketing campaign so they can improve long-term performance. It also links customer satisfaction to post-campaign feedback, quantifying campaign efficacy and audience impact. Thus, factor analysis may improve marketing input and consumer happiness, increasing sales.
#2 - Data Mining
Factor Analysis can rival artificial intelligence in data mining. FA simplifies data mining by filtering out variables that are strongly linked to one another. Data scientists have long struggled to uncover links and patterns in huge data sets; condensing correlated variables into factors makes that search far more manageable.
#3 - Machine Learning
Data mining and machine learning go together. Factor Analysis may be a Machine Learning tool
because of this. Machine learning algorithms employ Factor Analysis to minimise the number of
variables in a dataset to get a more accurate and enhanced collection of observable factors. They
are well trained with massive data to make room for additional applications. It is a popular
unsupervised machine learning technique for dimensionality reduction. Machine learning and
Factor Analysis may create data mining methods and speed up data investigation.
Conjoint Analysis
Conjoint analysis is a survey-based statistical analysis technique used during market research to measure how people value the different attributes that make up a product or service.
Already that may sound complicated, so let’s break it down a little more.
Conjoint analysis is a statistical technique that uses a survey to determine consumer preferences
before they make purchase decisions. It asks each respondent a series of questions—also known
as choice tasks—in which they select between a few packaged options based on relative preference.
Each package presents different product features, called attributes, and each attribute is shown at one of several levels, such as:
Product material
Price
Distance or location
Company/brand features
As they answer each question, respondents determine which trade-off feels like the best deal for
them.
Finally, companies use the data from that survey to determine a utility score, revealing which
attribute(s) respondents find most valuable. Survey data can also be used to measure the relative importance of each attribute in the overall decision.
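To make the idea of a utility score concrete, here is a hedged sketch of how part-worth utilities might be estimated from a small ratings-based conjoint exercise using dummy-coded regression; the attributes, levels, and ratings are invented, and real choice-based studies typically use logit-type models and purpose-built survey software rather than this simplified OLS approach.

```python
# A hedged sketch of estimating part-worth utilities from a ratings-based
# conjoint exercise. All profiles and ratings below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

profiles = pd.DataFrame({
    "price": ["$20", "$30", "$20", "$30", "$20", "$30", "$20", "$30"],
    "wash":  ["manual", "manual", "auto", "auto", "manual", "manual", "auto", "auto"],
    "wax":   ["yes", "no", "yes", "no", "no", "yes", "no", "yes"],
    # Hypothetical 1-10 preference ratings from one respondent.
    "rating": [9, 5, 8, 4, 7, 8, 5, 7],
})

# Dummy-coded linear model: each coefficient is a part-worth utility for an
# attribute level, relative to the omitted baseline level of that attribute.
model = smf.ols("rating ~ C(price) + C(wash) + C(wax)", data=profiles).fit()
print(model.params)  # utility score per attribute level
```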
The reason why this method is called conjoint analysis is because survey respondents have to
choose a conjoined product package with different attributes and levels. When developing a
conjoint analysis survey, companies can use a variety of statistical techniques. Each method asks
the respondent a different series of questions that businesses can leverage for different insights.
Choice-based conjoint analysis
This is the most prevalent type of conjoint analysis that market researchers use. Choice-based
conjoint analysis (CBC)—or discrete choice conjoint analysis—simulates the market and
demonstrates how respondents value certain attribute levels. Since this method is also most
commonly used to explain how conjoint analysis works, let’s go over an example.
Say you’ve started a car cleaning service and you want to develop a survey that asks customers which service package they would choose.
You break down your service into attributes and levels. The attributes could be price, services provided, time spent cleaning, and a manual vs automated car wash. Then you include different levels within each attribute, such as different price points or cleaning times.
Once your survey is complete and you analyze your conjoint data, you may learn that more
respondents would prefer a car service that’s quick and inexpensive. Or you may learn that
respondents would rather spend the extra $10 for a wax and tire polish—despite the service taking longer.
Adaptive conjoint analysis
Adaptive conjoint analysis (ACA) is similar to CBC analysis. The difference, though, is that each
question updates in real time and adapts to each respondent's choices. Adaptive conjoint analysis
is ideal for when respondents need to evaluate more attributes than in a choice-based survey.
This way, companies can get a full perspective of what their customers value and are looking for
by presenting combinations of attributes and levels that companies may not have thought of
before. ACA is also a more efficient way of surveying respondents because each follow-up
question becomes more curated to each previous answer, which makes the survey feel more relevant to the respondent.
Full-profile conjoint analysis
This conjoint analysis technique requires a full description of each product in a choice task. Other market research techniques usually limit the number of attributes, but with full-profile conjoint analysis, the respondent is able to see a thorough description with every attribute. Respondents then select the product they would be most likely to purchase.
Menu-based conjoint analysis
Typically, a conjoint survey doesn’t ask respondents outright what they’d like to pay for a
product or what features they’d like to see with it. Menu-based conjoint analysis surveys differ
because they enable the respondent to package a product by themselves. This allows companies
to see how potential customers may value certain combinations of attributes and levels.
Of course, everyone would like to have the best-quality product that’s inexpensive, comes with all sorts of benefits and features, or takes the least amount of time to finish or arrive. However, that is rarely realistic. Menu-based conjoint analysis instead asks respondents to weigh each attribute and level so they can customize a packaged product that they feel would deliver the most value.
Companies often conduct conjoint analysis surveys because they are one of the best survey
methods for determining customer values and preferences during the buying process. Let’s go
over the business benefits of conjoint analysis and why it’s so effective.
When companies know which product features are the most valuable to consumers, they can
highlight them in their advertisements. Say, for example, you learn that one respondent group
values your brand’s environmental mission and another group values the quality of your
materials. With data from your conjoint study, you can target some consumers with ads that
highlight your stance on climate change while other ads target consumers that are looking for high-quality materials.
Companies can also use a conjoint analysis experiment to determine which new product features
to add or take away based on survey data, utility scores, and preference scores. If you learn that
most respondents preferred an old feature compared to a potential new one, you could save time,
money, and resources that would otherwise be spent launching new products with features that customers don't actually value.
People make trade-offs every day. However, not all trade-offs are created equal because different
people have different priorities. For example, some people trade sleeping in so they can go to the
gym and grab breakfast to go before starting work; others may prefer to sleep in and skip the workout altogether.
Conjoint analysis mimics this kind of daily trade-off. For instance, when it comes to purchase decisions, respondents must weigh one attribute against another, just as they would in a real buying situation.
As mentioned, conjoint analysis doesn’t necessarily ask respondents what they specifically prefer in a product. Instead, respondents choose which packaged option they prefer, ultimately revealing which attribute levels they value most.
Conjoint analysis methods allow companies to gain insights on how much a consumer
monetarily values their product or service. By developing conjoint surveys that focus primarily
on product and pricing research, companies can understand how much consumers are willing to
pay.
Businesses can develop surveys that employ a brand price trade-off approach, wherein they learn
if consumers have a bias toward a competitor solely based on a name brand. This allows
companies to simulate a competitive market situation, allowing them to see whether or not consumers will pay more simply for the brand name.
Instead of hoping that a new product, feature, or service will land well with new consumers,
conjoint analysis can help companies make more informed decisions with their marketing
strategies. Companies often use conjoint analysis to forecast potential demand, predict marketing
trends, or determine product acceptance before they launch by noticing trends and quickly acting
on relevant data.
Cluster analysis
Cluster analysis is a multivariate data mining technique whose goal is to group objects (e.g., customers, products, or observations) into clusters based on their attributes. It is a basic and important step of data mining and a common technique for statistical data analysis, and it is used in many fields such as data compression, machine learning, and pattern recognition.
When plotted geometrically, objects within clusters should be very close together and clusters should be well separated from one another. There is no single best algorithm; the nature of the data is often the reason to choose one cluster method over another. It should be noted that an algorithm that works on a particular set of data will not necessarily work on another set of data. There are a number of different clustering methods, including the following.
Hierarchical Clustering
In the agglomerative method, first a cluster is made and then added to another cluster (the most similar and closest one) to form one single cluster. This process is repeated until all objects are in one cluster; that is, the method starts with single objects and keeps grouping them into larger clusters.
The divisive method is another kind of hierarchical method, in which clustering starts with the entire data set as one cluster and repeatedly splits it into smaller clusters.
Centroid-based Clustering
In this type of clustering, clusters are represented by a central entity, which may or may not be a
part of the given data set. The K-Means method is the best-known example, where k cluster centres are chosen and each object is assigned to the nearest cluster centre.
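A minimal sketch of centroid-based clustering with scikit-learn's KMeans is shown below; the synthetic two-dimensional points and the choice of k = 3 are assumptions made only for illustration.

```python
# A minimal sketch of centroid-based clustering with scikit-learn's KMeans.
# The points are synthetic; k=3 is an assumption. In practice k is usually
# chosen with the elbow method or silhouette scores.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Three loose groups of points around different centres.
points = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(50, 2)),
    rng.normal(loc=(5, 5), scale=0.5, size=(50, 2)),
    rng.normal(loc=(0, 5), scale=0.5, size=(50, 2)),
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)
print(kmeans.cluster_centers_)   # the k cluster centres
print(kmeans.labels_[:10])       # nearest-centre assignment per object
```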
Distribution-based Clustering
It is a type of clustering model, closely related to statistics, that is based on models of distribution.
Objects that belong to the same distribution are put into a single cluster. This type of clustering
can capture some complex properties of objects like correlation and dependence between
attributes.
Density-based Clustering
In this type of clustering, clusters are defined as areas of higher density than the remainder of the data set. Sparse areas between them are taken to separate the clusters, and the objects in these sparse regions are usually treated as noise or border points. The most popular density-based clustering algorithm is DBSCAN.
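As a small illustration of the density-based idea, the sketch below runs scikit-learn's DBSCAN on synthetic points; the eps and min_samples values are arbitrary tuning assumptions, and points in sparse regions come back labelled -1, i.e. as noise.

```python
# A minimal density-based clustering sketch using scikit-learn's DBSCAN.
# Points in sparse regions receive the label -1 (noise) instead of being
# forced into a cluster.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
dense_a = rng.normal(loc=(0, 0), scale=0.3, size=(80, 2))
dense_b = rng.normal(loc=(4, 4), scale=0.3, size=(80, 2))
sparse_noise = rng.uniform(low=-2, high=6, size=(15, 2))
data = np.vstack([dense_a, dense_b, sparse_noise])

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(data)
print(set(labels))   # cluster ids, with -1 marking points in sparse areas
```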
Cluster analysis is the principal job of exploratory data mining and a common method for statistical data analysis. It is used in many fields, such as machine learning, image analysis, pattern recognition, information retrieval, and bioinformatics.
Cluster analysis can be a compelling data-mining means for any organization that wants to
recognise discrete groups of customers, sales transactions, or other kinds of behaviours and
things. For example, insurance providers use cluster analysis to identify fraudulent claims.
Multidimensional Scaling
Multidimensional Scaling (MDS) is a statistical tool that helps discover the connections among objects in a lower-dimensional space using similarity or dissimilarity data.
MDS represents the similarity or dissimilarity among a set of objects or entities by translating high-dimensional data into a more comprehensible two- or three-dimensional space. This reduction aims to maintain the inherent relationships within the data, facilitating easier analysis and interpretation. MDS is particularly useful in fields such as psychology, sociology, marketing, geography, and biology, where the relationships among objects are often easier to grasp visually than numerically.
1. MDS reduces high-dimensional similarity or dissimilarity data to a low-dimensional space, making it easier to visualize and interpret. The primary goal is to create a spatial representation where the distances between points accurately reflect their original similarities or differences.
2. The technique strives to maintain the original proximities between data points; objects that are similar are positioned closer together, while dissimilar objects are placed further apart in the reduced space.
3. MDS utilizes optimization algorithms to minimize the discrepancy between the original high-dimensional distances and the distances in the reduced space. This involves iteratively adjusting the positions of the points so that the distances in the low-dimensional representation are as close as possible to the actual dissimilarities measured in the original high-dimensional space.
4. By revealing patterns and relationships in data through a visual framework, MDS assists
researchers and analysts in uncovering meaningful insights about data structure. These
insights are instrumental in crafting strategies across various domains, from cognitive
studies and geographic information analysis to market trend analysis and brand positioning.
Types
1. Classical Multidimensional Scaling
It takes an input matrix of dissimilarities between pairs of items and produces a coordinate matrix that minimizes a loss function called strain.
2. Metric Multidimensional Scaling
It is a superset of classical MDS that generalizes the optimization procedure to a variety of loss functions and input matrices with known distances and weights. It minimizes a cost function called stress, based on the discrepancy between the input dissimilarities and the Euclidean distances between items, along with the location of each item in the low-dimensional space.
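As a brief illustration, the sketch below embeds a small, invented dissimilarity matrix into two dimensions with scikit-learn's metric MDS; the four objects and their dissimilarities are assumptions chosen only to show the mechanics.

```python
# A minimal sketch of metric MDS with scikit-learn, embedding a small
# dissimilarity matrix into two dimensions. The four "objects" and their
# pairwise dissimilarities are invented for illustration.
import numpy as np
from sklearn.manifold import MDS

# Symmetric dissimilarity matrix for objects A, B, C, D.
dissimilarities = np.array([
    [0.0, 1.0, 4.0, 5.0],
    [1.0, 0.0, 3.0, 4.5],
    [4.0, 3.0, 0.0, 1.5],
    [5.0, 4.5, 1.5, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarities)
print(coords)        # 2-D coordinates whose distances approximate the input
print(mds.stress_)   # the minimized stress (discrepancy) value
```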
Applications
MDS is a standard approach in psychology for studying human perception and cognition. It helps psychologists understand how people perceive the similarities or differences between stimuli, for example words, images, or sounds.
Market research applies MDS to the tasks of brand positioning, product positioning, and
market segmentation.
Marketers employ MDS to visualize and interpret consumer perceptions of brands, products, or services, which helps them make strategic decisions about positioning and communication.
In geography, it permits cartographers to make maps that are true to the actual spatial relationships between locations.
In biology, MDS is mostly applied for phylogenetic analysis, protein structure prediction, and the comparison of genetic distances between species.
MDS is utilized in sociology and the social sciences for the analysis of social networks and group structures. Sociologists apply MDS to survey data, questionnaire responses, or other relational measures to reveal how individuals and groups relate to one another.
Advantages
It reduces the dimensionality of the original relationships between objects while preserving the essential information, which helps analysts understand the objects better without losing crucial information.
The adaptable nature of the technique makes it suitable for various disciplines and data types.
It assists in discovering hidden structures inside the data, thus revealing the underlying patterns and relationships.
It supports hypothesis testing and cluster analysis, aiding data-driven decision-making.
Limitations
Sensitivity to outliers: MDS results can be distorted by outliers, which can produce a misleading spatial configuration.
Subjective interpretation: the meaning of the spatial arrangement is a matter of subjective judgment, which can lead different analysts to different conclusions.
Choice of dimensions: identifying the appropriate number of dimensions for the reduced space can be a difficult task and may require a degree of experimentation.
Multiple Regression
In our daily lives, we come across variables, which are related to each other. To study the degree
of relationships between these variables, we make use of correlation. To find the nature of the
relationship between the variables, we have another measure, which is known as regression. In
this, we use correlation and regression to find equations such that we can estimate the value of one variable from the values of the others.
Multiple regression analysis is a statistical technique that analyzes the relationship between two
or more variables and uses the information to estimate the value of the dependent variables. In
multiple regression, the objective is to develop a model that relates a dependent variable y to two or more independent variables.
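As a minimal illustration, the sketch below fits a multiple regression of y on two simulated predictors with statsmodels; the data and coefficients are invented purely to show the mechanics.

```python
# A minimal multiple regression sketch with statsmodels: a dependent
# variable y modelled as a linear function of two predictors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = rng.normal(size=100)
y = 2.0 + 1.5 * x1 - 0.8 * x2 + rng.normal(scale=0.5, size=100)

X = sm.add_constant(np.column_stack([x1, x2]))  # intercept + two predictors
model = sm.OLS(y, X).fit()
print(model.params)     # estimated intercept and coefficients
print(model.rsquared)   # share of variation in y explained by the model
```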
Stepwise regression is a step by step process that begins by developing a regression model with a
single predictor variable and adds or deletes predictor variables one step at a time. Stepwise multiple regression is a method of determining a regression equation that begins with a single independent variable and adds independent variables one by one. The stepwise multiple
regression method is also known as the forward selection method because we begin with no
independent variables and add one independent variable to the regression equation at each of the
iterations. There is another method called backwards elimination method, which begins with an
entire set of variables and eliminates one independent variable at each of the iterations.
Residual: The variation in the dependent variable that is not explained by the regression model is called residual or error variation. It is also known as random error or sometimes just "error". This is the portion of the variation that cannot be attributed to the independent variables.
The stepwise method has several advantages:
Only independent variables with significant (non-zero) regression coefficients are included in the regression equation.
The changes in the multiple standard error of estimate and the coefficient of determination are shown at each step.
The stepwise multiple regression is efficient in finding the regression equation with only significant regression coefficients.
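The sketch below illustrates one way the forward selection idea could be coded; the 0.05 entry threshold and the simulated data are assumptions for illustration, not a fixed rule of the method.

```python
# A hedged sketch of forward stepwise (forward selection) regression:
# start with no predictors and, at each iteration, add the candidate that
# is most significant, stopping when no addition passes the threshold.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(200, 4)), columns=["x1", "x2", "x3", "x4"])
data["y"] = 1.0 + 2.0 * data["x1"] - 1.0 * data["x3"] + rng.normal(size=200)

selected, remaining = [], ["x1", "x2", "x3", "x4"]
while remaining:
    # P-value of each candidate when added to the current model.
    pvals = {}
    for var in remaining:
        X = sm.add_constant(data[selected + [var]])
        pvals[var] = sm.OLS(data["y"], X).fit().pvalues[var]
    best = min(pvals, key=pvals.get)
    if pvals[best] >= 0.05:      # no remaining variable is significant
        break
    selected.append(best)
    remaining.remove(best)

print("Variables retained:", selected)
```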
Data Visualization
Data visualization is the process of taking all of your data reporting and transforming it into a
visual format that makes it much easier to understand and interpret. With data visualizations,
people of all different backgrounds and expertise can understand data in the same way.
For example, you may have a lot of information about your customers in a giant spreadsheet
somewhere. While it is nice that you have collected so much information, you might find it
difficult to interpret. After all, you might just see a bunch of numbers on a spreadsheet and have no idea what they mean. Turning the data into a visual format makes it easier for you to pull out what is important. This makes it easy for businesses to identify
patterns and errors in the data so they can make more informed decisions.
It's important to visualize your data because it makes it much easier for you to identify the most
important parts of the data. For example, with data visualization software, you can gain in-depth
insight into patterns and trends and pinpoint areas that need improvement.
Regardless of the amount of data you have, it is much easier to pull out the most important parts
of that data if you transform it into a pie chart, graphical format, or some other visual tool. Then,
you can identify other pieces of information that you may have otherwise overlooked if you were only looking at the raw numbers.
So, what is the purpose of data visualization? There are several advantages of using data visualization.
Easier understanding of data
One of the biggest advantages of data visualization is that you will have an easier time
understanding the information in front of you. When you simply see numbers on a page, it might
be difficult to grasp just how important the information is and what the data points mean.
However, if you transform the data into a visual format, you will have an easier time
understanding the most important components. Then, you can use the information you gather
to make data-driven decisions, which can set your company up for success.
By turning data into a visual format, you may be able to extract insights that you would have
otherwise overlooked.
Improved decision making
Another advantage of data visualization is that you'll have an easier time understanding
your audience insights, which can make it easier for you to make the right decisions for your
company.
Just because you have accumulated a tremendous amount of information doesn't necessarily
mean you know what to do with it. Instead, you need to understand what the information means and how it can be applied.
By visualizing data, you can analyze important information related to your customers, allowing you to act on those insights and make better decisions for your company.
Better communication
Figuring out what's important about your data is one thing, but conveying that to someone else is
something entirely different. You can probably talk for hours about the importance of your data,
but how do you know that your audience will understand you?
Fortunately, data visualization can help with presenting data in a way that is easy for anyone to
understand.
For example, you might use tools to analyze website performance, but how are you going to
explain to someone else what the data means? With data visualization, you can actually show
your audience the importance of the information you have collected. Data visualization makes it much easier for others to grasp the point you are making.
Increased efficiency
Finally, data visualizations can make it much easier for you to increase efficiency throughout the
company. Think about how long it takes to go through a spreadsheet of information by hand, let alone to explain it to someone else. Data visualization saves that time by visually communicating it. It’ll be easier for someone to understand data if they look at a chart or graph instead of rows of numbers.
With data visualizations, anyone can interpret and understand data, not just data scientists.
Are you ready to bring your data to life through visual representation? If so, you may be curious about which types of charts and graphs to use.
Bar charts
If you are showing segments of information that fall into different categories, you should
consider using a bar chart. A vertical bar chart is helpful when comparing different categories,
such as age groups or product classes. A horizontal bar chart is ideal if you’re comparing many categories or categories with long labels.
Bar graphs are great for showing differences in orders of magnitude between categories and
classes.
Line Charts
If you need a tool to help you show changes over time, you should use a horizontal line graph or
chart. This type of graph is beneficial for identifying areas of resistance in your data set.
Even though you can use a bar chart to show changes over time, a line chart is better if there are
relatively small changes over shorter periods because it will be much easier for people to see the changes.
Pie Charts
If you have information that you can divide into different segments of the whole, you should use
a pie chart. The slices in the pie chart will represent different percentages of the total value, and
with categorical data, you can make each category a different color.
Pie charts make it easy to see if there is one category in the data set that is dominating the others,
and you can divide a pie chart into as many categories as are required.
Scatter Plots
If you have two variables that pair well together, you may want to use a scatter plot. You can
plot the two variables on a scatter diagram, which makes it easier for people to see the relationship between them.
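For illustration, the sketch below draws the four chart types just described with matplotlib; the category labels and numbers are made up purely to show the mechanics.

```python
# A minimal matplotlib sketch of the chart types described above,
# using made-up numbers purely to show the mechanics of each visual.
import matplotlib.pyplot as plt

categories = ["18-24", "25-34", "35-44", "45+"]
customers = [120, 340, 260, 180]
months = ["Jan", "Feb", "Mar", "Apr", "May"]
sales = [10, 12, 9, 15, 18]

fig, axes = plt.subplots(2, 2, figsize=(10, 8))

axes[0, 0].bar(categories, customers)            # bar chart: compare categories
axes[0, 0].set_title("Customers by age group")

axes[0, 1].plot(months, sales, marker="o")       # line chart: change over time
axes[0, 1].set_title("Monthly sales")

axes[1, 0].pie(customers, labels=categories, autopct="%1.0f%%")  # shares of a whole
axes[1, 0].set_title("Share of customers")

axes[1, 1].scatter(sales, [s * 3 + 2 for s in sales])  # two paired variables
axes[1, 1].set_title("Sales vs. revenue")
axes[1, 1].set_xlabel("Sales")
axes[1, 1].set_ylabel("Revenue")

fig.tight_layout()
plt.show()
```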
Heat Maps
One of the most powerful personalized marketing tools is heat maps. A heat map is great for
providing your department with an analysis of how people interact with your website.
Essentially, a heat map will show where people spend most of their time on your website. What
pages are doing the best? What images do people look at the longest? You can show all of this with a heat map.
A heat map easily shows people where and how visitors interact with your website, so you can adjust your pages accordingly.
Geographic Maps
If geography is important for your business, you may want to use a geographic map.
Geographic maps show population density so you can determine where most of your website
visitors are coming from. Then, you can figure out why your visitors are coming from a specific location and tailor your content to them.
To present data in a compelling and effective manner, it is essential to follow a few best practices.
Label the axes: If you develop a graph or a chart, you need to label the axes and
categories appropriately. That makes it easy for people to understand what each category
is.
Use legends: Do you have category names that are too large to fit on the page? If so, you
need to use legends. Legends are important for letting people know what the graph is
about.
Contrast colors: Do not use colors that are too similar. If you have a handful of
categories, make sure you choose colors that are easy to tell apart.
Enlarge the image: You may know what the data is about because you have made the
chart, but you need to enlarge the image to make it easier for people to see.
Do not clutter: If you have a lot of information to share, consider creating multiple charts.
Do not clutter the image, as this will make it difficult for people to understand the graph.
Data visualization plays a key role in making complex data easier to understand. But before you can create visual representations of
your data, you need to track and record your data, which you can do with Mailchimp.
If you are looking for the best tools to track data quickly, Mailchimp can help. Mailchimp has a
variety of marketing analytics and reporting tools that make it easy to monitor trends, track performance, and understand your audience.
Then, once you have that information, you can use Mailchimp to turn the data into a visual format.
Mailchimp can transform how you look at data and help you better understand trends and patterns. Take a look at the tools and resources that Mailchimp offers, and visualize your data to get more value out of it.
Forecasting
Forecasting is a planning tool by which historical data is used to predict the direction of future
trends.
How Forecasting Works
Today, forecasting blends data analysis, machine learning, statistical modeling, and expert
judgment. Forecasting provides benchmarks for firms, which need a long-term perspective of
operations. For example, much of the derivatives market in options and futures trading is an
outgrowth of business and investor forecasting, all to hedge or insure against adverse price movements.
Forecasting in Investing
Equity analysts use forecasting to predict how trends, such as gross domestic product (GDP) or
unemployment, will change in the coming quarter or year. Statisticians employ forecasting to
analyze the potential impact of a change in business operations. Analysts then derive earnings
estimates that are often aggregated into a consensus number. If actual earnings announcements
miss the estimates, it can have a large impact on a company’s stock price.
Forecasting in Business
By assessing market conditions through qualitative and quantitative measures, companies aim to predict what lies ahead.
These predictions guide critical choices ranging from market entry strategies and product development to supply chain management and workforce planning, and so forecasting is often central to business planning.
The consequences of getting a forecast wrong can be far-reaching. Correct predictions allow
businesses to improve how they divide their resources, whether they can capitalize on emerging
prospects, and mitigate risks. Conversely, inaccurate forecasts can lead to misaligned strategies,
inefficient use of resources, missed opportunities, and risks that weren't managed or insured for.
Market strategy: Accurate projections of consumer demand and market trends inform decisions about which markets to enter and how to position products.
Operations and supply chain: Anticipating future demand and supply, and the constraints on both, is crucial for maintaining smooth operations and controlling costs.
Human resources: Workforce planning relies heavily on forecasts of future business activity to decide how many people to hire and train.
The consequences of poor forecasting are often severe. Companies may find themselves over- or under-staffed, carrying too much or too little inventory, or committed to strategies that no longer fit the market.
ARIMA
An autoregressive integrated moving average (ARIMA) model is a statistical analysis model that uses time series data to either better understand the data set or to predict future trends.
A statistical model is autoregressive if it predicts future values based on past values. For
example, an ARIMA model might seek to predict a stock's future prices based on its past performance.
An autoregressive integrated moving average model is a form of regression analysis that gauges
the strength of one dependent variable relative to other changing variables. The model's goal
is to predict future securities or financial market moves by examining the differences between values in the series instead of through the actual values.
Autoregression (AR): refers to a model that shows a changing variable that regresses on its own lagged, or prior, values.
Integrated (I): represents the differencing of raw observations to allow the time series to become stationary (i.e., data values are replaced by the difference between the data values and the previous values).
Moving average (MA): incorporates the dependency between an observation and a residual error from a moving average model applied to lagged observations.
To begin building an ARIMA model for an investment, you download as much of the price data
as you can. Once you've identified the trends in the data, you identify the lowest order of differencing (d) by observing the autocorrelations. If the lag-1 autocorrelation is zero or negative, the series is already sufficiently differenced. You may need to difference the series more if the lag-1 autocorrelation is still high.
Next, determine the order of regression (p) and order of moving average (q) by comparing
autocorrelations and partial autocorrelations. Once you have the information you need, you can fit the model and use it to forecast.
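As a rough illustration of that workflow, the sketch below fits an ARIMA model with statsmodels on a simulated price series; the order (1, 1, 1) and the data are assumptions for demonstration only.

```python
# A hedged sketch of fitting an ARIMA model with statsmodels once p, d and q
# have been chosen from the (partial) autocorrelation plots. The order
# (1, 1, 1) and the simulated price series are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# A random-walk-like "price" series stands in for downloaded price data.
prices = pd.Series(100 + np.cumsum(rng.normal(scale=1.0, size=250)))

model = ARIMA(prices, order=(1, 1, 1))   # (p, d, q)
fitted = model.fit()
print(fitted.summary())

# Forecast the next 10 periods from the fitted model.
print(fitted.forecast(steps=10))
```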
ARIMA models have strong points and are good at forecasting based on past circumstances, but
there are more reasons to be cautious when using ARIMA. In stark contrast to investing
disclaimers that state "past performance is not an indicator of future performance...," ARIMA
models assume that past values have some residual effect on current or future values and use past data to forecast future events.
The following list summarizes ARIMA traits that demonstrate good and bad characteristics.
Pros
Good for short-term forecasting
Needs only the historical series being modeled
Cons
Poor for long-term forecasting
Computationally expensive
Parameter selection involves subjective judgment
ARIMA is a method for forecasting or predicting future outcomes based on a historical time
series. It is based on the statistical concept of serial correlation, where past data points influence future data points.
What Are the Differences Between Autoregressive and Moving Average Models?
ARIMA combines autoregressive features with those of moving averages. An AR(1) autoregressive process, for instance, is one in which the current value is based on the
immediately preceding value, while an AR(2) process is one in which the current value is based
on the previous two values. A moving average is a calculation used to analyze data points by
creating a series of averages of different subsets of the full data set to smooth out the influence
of outliers. As a result of this combination of techniques, ARIMA models can take into account
trends, cycles, seasonality, and other non-static types of data when making forecasts.
How Does ARIMA Forecasting Work?
ARIMA forecasting is achieved by plugging in time series data for the variable of interest.
Statistical software will identify the appropriate number of lags or amount of differencing to be
applied to the data and check for stationarity. It will then output the results, which are often interpreted similarly to those of a multiple linear regression model.
The ARIMA model is used as a forecasting tool to predict how something will act in the future based on its past performance.
ARIMA modeling is generally inadequate for long-term forecasting, such as more than six
months ahead, because it uses past data and parameters that are influenced by human thinking.
For this reason, it is best used with other technical analysis tools to get a clearer picture of an
asset's performance.