DMW Unit4


Market Basket Analysis in Data Mining

Market Basket Analysis is a data mining technique used to uncover purchase patterns in a retail setting. Basically, market basket analysis involves analyzing the combinations of products that are bought together.
It is a careful study of the purchases made by customers in a supermarket, and it identifies the patterns of items that customers frequently purchase together. This analysis can help companies promote deals, offers, and sales, and data mining techniques help to achieve this analysis task. Examples:
 Data mining concepts are used in sales and marketing to provide better customer service, to improve cross-selling opportunities, and to increase direct mail response rates.
 Customer retention, in the form of pattern identification and prediction of likely defections, is possible with data mining.
 Risk assessment and fraud detection also use data mining concepts for identifying inappropriate or unusual behavior.
Market basket analysis mainly works with the ASSOCIATION
RULE {IF} -> {THEN}.
 IF means Antecedent: An antecedent is an item found
within the data
 THEN means Consequent: A consequent is an item found
in combination with the antecedent.

Let’s see how the ASSOCIATION RULE {IF} -> {THEN} is used in Market Basket Analysis. For example, customers who buy a domain are likely to also need extra plugins/extensions to make it more useful.
As said above, the antecedent is the item set already present in the data; it corresponds to the {IF} component of the rule, which in this example is the domain. Likewise, the consequent is the item found in combination with the antecedent; it corresponds to the {THEN} component, which in this example is the extra plugins/extensions.
With the help of these rules, we can predict customers' behavioral patterns and build combination offers for products that customers will probably buy together. That in turn increases the sales and revenue of the company.
With the help of the Apriori algorithm, we can further find and organize the item sets that are frequently bought together by consumers.
There are three components in APRIORI ALGORITHM:
 SUPPORT
 CONFIDENCE
 LIFT
Now take an example: suppose 5,000 transactions have been made through a popular eCommerce website, and we want to calculate the support, confidence, and lift for two products, say a pen and a notebook. Out of the 5,000 transactions, 500 contain a pen, 700 contain a notebook, and 100 contain both (the count for the combination cannot exceed the count for either individual item).
SUPPORT: It is calculated as the number of transactions containing the itemset divided by the total number of transactions made:
Support = freq(A, B)/N
support(pen) = transactions containing a pen / total transactions = 500/5000 = 10 percent
CONFIDENCE: It measures how often the consequent is bought when the antecedent is bought, i.e. whether the product sells through combined sales rather than individual sales. It is calculated as the combined transactions divided by the antecedent's individual transactions:
Confidence = freq(A, B)/freq(A)
confidence(pen -> notebook) = 100/500 = 20 percent
LIFT: Lift measures the strength of the rule, i.e. how much more often the two products are bought together than would be expected if they sold independently:
Lift = Confidence(A -> B)/Support(B)
lift(pen -> notebook) = 20% / 14% ≈ 1.43
A lift value below 1 means the combination is not frequently bought together by consumers. In this case the lift is above 1, which shows that the probability of buying both items together is higher than would be expected from the individual items' sales alone.
With this, we have an overall view of Market Basket Analysis in data mining and of how to calculate these metrics for combinations of products.
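To make the arithmetic concrete, here is a minimal Python sketch (plain Python, no libraries) that recomputes the pen-and-notebook numbers above; the counts are the illustrative figures assumed in the example.

```python
N = 5000                 # total transactions
freq_pen = 500           # transactions containing a pen
freq_notebook = 700      # transactions containing a notebook
freq_both = 100          # transactions containing both

support_pen = freq_pen / N                   # 500/5000 = 10%
support_notebook = freq_notebook / N         # 700/5000 = 14%
confidence = freq_both / freq_pen            # 100/500 = 20% for pen -> notebook
lift = confidence / support_notebook         # 0.20/0.14 ≈ 1.43

print(f"support(pen)              = {support_pen:.0%}")
print(f"confidence(pen->notebook) = {confidence:.0%}")
print(f"lift(pen->notebook)       = {lift:.2f}")
```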
Types of Market Basket Analysis
There are three types of Market Basket Analysis. They are as follows:
1. Descriptive market basket analysis: This sort of analysis looks for patterns and connections that exist between the components of a market basket. It is mostly used to understand consumer behavior, including which products are purchased in combination and which item combinations are most typical. Descriptive market basket analysis helps retailers place products in their stores more profitably by revealing which products are frequently bought together.
2. Predictive Market Basket Analysis: Market basket analysis
that predicts future purchases based on past purchasing
patterns is known as predictive market basket analysis.
Large volumes of data are analyzed using machine learning
algorithms in this sort of analysis in order to create
predictions about which products are most likely to be
bought together in the future. Retailers may make data-
driven decisions about which products to carry, how to
price them, and how to optimize shop layouts with the use
of predictive market basket research.
3. Differential Market Basket Analysis: Differential market
basket analysis analyses two sets of market basket data to
identify variations between them. Comparing the behavior
of various client segments or the behavior of customers
over time is a common usage for this kind of study.
Retailers can respond to shifting consumer behavior by
modifying their marketing and sales tactics with the help
of differential market basket analysis.
Benefits of Market Basket Analysis
1. Enhanced Customer Understanding: Market basket
research offers insights into customer behavior, including
what products they buy together and which products they
buy the most frequently. Retailers can use this information
to better understand their customers and make informed
decisions.
2. Improved Inventory Management: By examining market
basket data, retailers can determine which products are
sluggish sellers and which ones are commonly bought
together. Retailers can use this information to make well-
informed choices about what products to stock and how to
manage their inventory most effectively.
3. Better Pricing Strategies: A better understanding of the
connection between product prices and consumer
behavior might help merchants develop better pricing
strategies. Using this knowledge, pricing plans that boost
sales and profitability can be created.
4. Sales Growth: Market basket analysis can assist businesses
in determining which products are most frequently bought
together and where they should be positioned in the store
to grow sales. Retailers may boost revenue and enhance
customer shopping experiences by improving store layouts
and product positioning.
Applications of Market Basket Analysis
1. Retail: Market basket research is frequently used in the
retail sector to examine consumer buying patterns and
inform decisions about product placement, inventory
management, and pricing tactics. Retailers can utilize
market basket research to identify which items are sluggish
sellers and which ones are commonly bought together, and
then modify their inventory management strategy
accordingly.
2. E-commerce: Market basket analysis can help online merchants better understand their customers' buying habits and make data-driven decisions about product recommendations and targeted advertising campaigns. The behaviour of visitors to a website can also be examined using market basket analysis to pinpoint problem areas.
3. Finance: Market basket analysis can be used to evaluate
investor behaviour and forecast the types of investment
items that investors will likely buy in the future. The
performance of investment portfolios can be enhanced by
using this information to create tailored investment
strategies.
4. Telecommunications: To evaluate consumer behaviour and
make data-driven decisions about which goods and
services to provide, the telecommunications business
might employ market basket analysis. The usage of this
data can enhance client happiness and the shopping
experience.
5. Manufacturing: To evaluate consumer behaviour and
make data-driven decisions about which products to
produce and which materials to employ in the production
process, the manufacturing sector might use market
basket analysis. Utilizing this knowledge will increase
effectiveness and cut costs.
1. Frequent item sets are a fundamental concept in association rule mining, which is a technique used in data mining to discover relationships between items in a dataset. The goal of association rule mining is to identify sets of items that frequently occur together and the relationships between them.
2. A frequent item set is a set of items that occur together
frequently in a dataset. The frequency of an item set is
measured by the support count, which is the number of
transactions or records in the dataset that contain the item
set. For example, if a dataset contains 100 transactions and
the item set {milk, bread} appears in 20 of those
transactions, the support count for {milk, bread} is 20.
3. Association rule mining algorithms, such as Apriori or FP-Growth, are used to find frequent item sets and generate association rules. These algorithms work by iteratively generating candidate item sets and pruning those that do not meet the minimum support threshold. Once the frequent item sets are found, association rules can be generated using the concept of confidence, which is the ratio of the number of transactions that contain the whole item set to the number of transactions that contain the antecedent (left-hand side) of the rule.
4. Frequent item sets and association rules can be used for a
variety of tasks such as market basket analysis, cross-
selling and recommendation systems. However, it should
be noted that association rule mining can generate a large
number of rules, many of which may be irrelevant or
uninteresting. Therefore, it is important to use appropriate
measures such as lift and conviction to evaluate the
interestingness of the generated rules.
Association mining searches for frequent items in the data set. Frequent itemset mining usually finds interesting associations and correlations between item sets in transactional and relational databases. In short, frequent mining shows which items appear together in a transaction or relationship.
Need of Association Mining: Frequent mining is the generation of association rules from a transactional dataset. If two items X and Y are purchased together frequently, then it is good to put them together in stores or to offer a discount on one item with the purchase of the other; this can really increase sales. For example, it is likely that if a customer buys milk and bread, he/she also buys butter. So the association rule is {milk, bread} => {butter}, and the seller can suggest butter to a customer who buys milk and bread.
Important Definitions:
 Support: It is one of the measures of interestingness. It tells about the usefulness and certainty of rules. A support of 5% means that a total of 5% of the transactions in the database follow the rule.
Support(A -> B) = Support_count(A ∪ B) / total number of transactions
 Confidence: A confidence of 60% means that 60% of the
customers who purchased a milk and bread also bought
butter.
Confidence(A -> B) = Support_count(A ∪ B) / Support_count(A)
If a rule satisfies both minimum support and minimum
confidence, it is a strong rule.
 Support_count(X): Number of transactions in which X
appears. If X is A union B then it is the number of
transactions in which A and B both are present.
 Maximal Itemset: An itemset is maximal frequent if none
of its supersets are frequent.
 Closed Itemset: An itemset is closed if none of its immediate supersets has the same support count as the itemset.
 K-Itemset: An itemset which contains K items is a K-itemset. So it can be said that an itemset is frequent if the corresponding support count is greater than or equal to the minimum support count.
Example on finding frequent itemsets – Consider the given dataset with the given transactions.

 Let's say the minimum support count is 3.


 The relation that holds is: maximal frequent => closed => frequent
1-frequent itemsets:
{A} = 3 // not closed due to {A, C}; not maximal
{B} = 4 // not closed due to {B, D}; not maximal
{C} = 4 // not closed due to {C, D}; not maximal
{D} = 5 // closed, since no immediate superset has the same count; not maximal
2-frequent itemsets:
{A, B} = 2 // not frequent (support count < minimum support count), so ignore
{A, C} = 3 // not closed due to {A, C, D}
{A, D} = 3 // not closed due to {A, C, D}
{B, C} = 3 // not closed due to {B, C, D}
{B, D} = 4 // closed, but not maximal due to {B, C, D}
{C, D} = 4 // closed, but not maximal due to {B, C, D}
3-frequent itemsets:
{A, B, C} = 2 // not frequent (support count < minimum support count), so ignore
{A, B, D} = 2 // not frequent (support count < minimum support count), so ignore
{A, C, D} = 3 // maximal frequent
{B, C, D} = 3 // maximal frequent
4-frequent itemsets:
{A, B, C, D} = 2 // not frequent, so ignore
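The closed/maximal labels above can be checked mechanically. Below is a small Python sketch, assuming only the support counts listed in this example, that classifies each frequent itemset as closed and/or maximal.

```python
# Support counts from the worked example above (minimum support count = 3).
support = {
    frozenset("A"): 3, frozenset("B"): 4, frozenset("C"): 4, frozenset("D"): 5,
    frozenset("AB"): 2, frozenset("AC"): 3, frozenset("AD"): 3,
    frozenset("BC"): 3, frozenset("BD"): 4, frozenset("CD"): 4,
    frozenset("ABC"): 2, frozenset("ABD"): 2, frozenset("ACD"): 3,
    frozenset("BCD"): 3, frozenset("ABCD"): 2,
}
min_sup = 3
frequent = {s: c for s, c in support.items() if c >= min_sup}

def is_closed(s):
    # Closed: no immediate (one-item-larger) superset has the same support count.
    return not any(s < t and len(t) == len(s) + 1 and support[t] == support[s]
                   for t in support)

def is_maximal(s):
    # Maximal: no proper superset is frequent.
    return not any(s < t for t in frequent)

for s in sorted(frequent, key=lambda s: (len(s), sorted(s))):
    labels = [name for name, ok in
              [("closed", is_closed(s)), ("maximal", is_maximal(s))] if ok]
    print(sorted(s), frequent[s], ", ".join(labels) or "-")
```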
Advantages and Disadvantages:
Advantages of using frequent item sets and association rule
mining include:
1. Efficient discovery of patterns: Association rule mining
algorithms are efficient at discovering patterns in large
datasets, making them useful for tasks such as market
basket analysis and recommendation systems.
2. Easy to interpret: The results of association rule mining are
easy to understand and interpret, making it possible to
explain the patterns found in the data.
3. Can be used in a wide range of applications: Association
rule mining can be used in a wide range of applications
such as retail, finance, and healthcare, which can help to
improve decision-making and increase revenue.
4. Handling large datasets: These algorithms can handle large
datasets with many items and transactions, which makes
them suitable for big-data scenarios.
Disadvantages of using frequent item sets and association rule
mining include:
1. Large number of generated rules: Association rule mining
can generate a large number of rules, many of which may
be irrelevant or uninteresting, which can make it difficult
to identify the most important patterns.
2. Limited in detecting complex relationships: Association
rule mining is limited in its ability to detect complex
relationships between items, and it only considers the co-
occurrence of items in the same transaction.
3. Can be computationally expensive: As the number of items
and transactions increases, the number of candidate item
sets also increases, which can make the algorithm
computationally expensive.
4. Need to define the minimum support and confidence
threshold: The minimum support and confidence threshold
must be set before the association rule mining process,
which can be difficult and requires a good understanding
of the data.
Association Rule
Association rule mining finds interesting associations and relationships among large sets of data items. An association rule shows how frequently an itemset occurs in a transaction. A typical example is Market Basket Analysis, one of the key techniques used by large retailers to show associations between items. It allows retailers to identify relationships between the items that people buy together frequently. Given a set of transactions, we can find rules that predict the occurrence of an item based on the occurrences of other items in the transaction.
TID Items

1 Bread, Milk

2 Bread, Diaper, Beer, Eggs

3 Milk, Diaper, Beer, Coke

4 Bread, Milk, Diaper, Beer

5 Bread, Milk, Diaper, Coke

Before we start defining the rule, let us first review the basic definitions.
Support Count (σ) – frequency of occurrence of an itemset.
Here σ({Milk, Bread, Diaper}) = 2
Frequent Itemset – an itemset whose support is greater than or equal to the minsup threshold.
Association Rule – an implication expression of the form X -> Y, where X and Y are any two itemsets.
Example: {Milk, Diaper} -> {Beer}
Rule Evaluation Metrics –
 Support(s) – the number of transactions that include all items in both X and Y as a percentage of the total number of transactions. It is a measure of how frequently the collection of items occurs together, as a fraction of all transactions:
Support(X -> Y) = σ(X ∪ Y) / |T|
 Confidence(c) – the ratio of the number of transactions that include all items in both X and Y to the number of transactions that include all items in X. It measures how often the items in Y appear in transactions that also contain the items in X:
Conf(X -> Y) = Supp(X ∪ Y) / Supp(X)
 Lift(l) – the confidence of the rule divided by the expected confidence, assuming that the itemsets X and Y are independent of each other; the expected confidence is simply the support of {Y}:
Lift(X -> Y) = Conf(X -> Y) / Supp(Y)
A lift value near 1 indicates that X and Y appear together about as often as expected, greater than 1 means they appear together more often than expected, and less than 1 means they appear together less often than expected. Greater lift values indicate a stronger association.
Example – From the above table, for {Milk, Diaper} -> {Beer}:
s = σ({Milk, Diaper, Beer}) / |T| = 2/5 = 0.4
c = σ({Milk, Diaper, Beer}) / σ({Milk, Diaper}) = 2/3 ≈ 0.67
l = Supp({Milk, Diaper, Beer}) / (Supp({Milk, Diaper}) × Supp({Beer})) = 0.4 / (0.6 × 0.6) = 1.11
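As a sanity check, here is a minimal Python sketch that recomputes s, c, and l for {Milk, Diaper} -> {Beer} directly from the five-transaction table above.

```python
# The five transactions from the table above.
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def sigma(itemset):
    # Support count: number of transactions containing the whole itemset.
    return sum(itemset <= t for t in transactions)

N = len(transactions)
X, Y = {"Milk", "Diaper"}, {"Beer"}

s = sigma(X | Y) / N                  # 2/5 = 0.4
c = sigma(X | Y) / sigma(X)           # 2/3 ≈ 0.67
l = c / (sigma(Y) / N)                # 0.67/0.6 ≈ 1.11

print(f"s = {s:.2f}, c = {c:.2f}, l = {l:.2f}")
```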
The association rule is very useful in analyzing datasets. The data is collected using bar-code scanners in supermarkets. Such databases consist of a large number of transaction records, each listing all items bought by a customer in a single purchase. So the manager can learn whether certain groups of items are consistently purchased together and use this data for adjusting store layouts, cross-selling, and promotions based on these statistics.
Apriori Algorithm
The Apriori algorithm was proposed by R. Agrawal and R. Srikant in 1994 for finding frequent itemsets in a dataset for boolean association rules. The algorithm is named Apriori because it uses prior knowledge of frequent itemset properties. We apply an iterative, level-wise search where frequent k-itemsets are used to find (k+1)-itemsets.
To improve the efficiency of the level-wise generation of frequent itemsets, an important property called the Apriori property is used, which helps by reducing the search space.
Apriori Property –
All non-empty subsets of a frequent itemset must also be frequent. The key concept of the Apriori algorithm is the anti-monotonicity of the support measure. Apriori assumes that:
All subsets of a frequent itemset must be frequent (Apriori property).
If an itemset is infrequent, all its supersets will be infrequent.
Before we start working through the algorithm, go through the definitions explained in the previous section.
Consider the following dataset; we will find frequent itemsets and generate association rules for it.
The minimum support count is 2.
The minimum confidence is 60%.
Step-1: K=1
(I) Create a table containing the support count of each item present in the dataset – called C1 (candidate set).

(II) Compare each candidate set item's support count with the minimum support count (here min_support = 2); if the support count of a candidate set item is less than min_support, remove that item. This gives us the itemset L1.
Step-2: K=2
 Generate candidate set C2 using L1 (this is called the join step). The condition for joining Lk-1 with Lk-1 is that the itemsets should have (K-2) elements in common.
 Check whether all subsets of each itemset are frequent, and if not, remove that itemset. (For example, the subsets of {I1, I2} are {I1} and {I2}, and they are frequent. Check this for each itemset.)
 Now find the support count of these itemsets by searching the dataset.

(II) Compare the candidate (C2) support count with the minimum support count (here min_support = 2); if the support count of a candidate set item is less than min_support, remove that item. This gives us the itemset L2.

Step-3:
o Generate candidate set C3 using L2 (join step). The condition for joining Lk-1 with Lk-1 is that the itemsets should have (K-2) elements in common, so here, for L2, the first element should match.
So the itemsets generated by joining L2 are {I1, I2, I3}, {I1, I2, I5}, {I1, I3, I5}, {I2, I3, I4}, {I2, I4, I5}, {I2, I3, I5}.
o Check whether all subsets of these itemsets are frequent, and if not, remove that itemset. (Here the subsets of {I1, I2, I3} are {I1, I2}, {I2, I3}, {I1, I3}, which are frequent. For {I2, I3, I4}, the subset {I3, I4} is not frequent, so remove it. Similarly check every itemset.)
o Find the support count of the remaining itemsets by searching the dataset.
(II) Compare the candidate (C3) support count with the minimum support count (here min_support = 2); if the support count of a candidate set item is less than min_support, remove that item. This gives us the itemset L3.

Step-4:
o Generate candidate set C4 using L3 (join step). The condition for joining Lk-1 with Lk-1 (K=4) is that the itemsets should have (K-2) elements in common, so here, for L3, the first 2 elements (items) should match.
o Check whether all subsets of these itemsets are frequent (here the itemset formed by joining L3 is {I1, I2, I3, I5}, and its subsets include {I1, I3, I5}, which is not frequent). So there is no itemset in C4.
o We stop here because no further frequent itemsets are found.

Thus, we have discovered all the frequent itemsets. Now the generation of strong association rules comes into the picture. For that we need to calculate the confidence of each rule.
Confidence –
A confidence of 60% means that 60% of the customers who purchased milk and bread also bought butter.
Confidence(A -> B) = Support_count(A ∪ B) / Support_count(A)
So, taking any frequent itemset as an example, we will show the rule generation.
Itemset {I1, I2, I3} //from L3
So the rules can be:
[I1^I2]=>[I3] //confidence = sup(I1^I2^I3)/sup(I1^I2) = 2/4*100 = 50%
[I1^I3]=>[I2] //confidence = sup(I1^I2^I3)/sup(I1^I3) = 2/4*100 = 50%
[I2^I3]=>[I1] //confidence = sup(I1^I2^I3)/sup(I2^I3) = 2/4*100 = 50%
[I1]=>[I2^I3] //confidence = sup(I1^I2^I3)/sup(I1) = 2/6*100 = 33%
[I2]=>[I1^I3] //confidence = sup(I1^I2^I3)/sup(I2) = 2/7*100 = 28%
[I3]=>[I1^I2] //confidence = sup(I1^I2^I3)/sup(I3) = 2/6*100 = 33%
So if the minimum confidence is 50%, the first 3 rules can be considered strong association rules.
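For readers who want to trace the whole computation, here is a compact, self-contained Python sketch of the level-wise Apriori search. The transaction table itself is not reproduced in the text, so the nine transactions below are an assumption: the classic textbook dataset whose support counts match every number quoted in this walkthrough (e.g. sup(I1) = 6, sup(I2) = 7, sup(I1^I2^I3) = 2).

```python
from itertools import combinations

# Assumed dataset (classic nine-transaction example); it reproduces the
# support counts quoted in the walkthrough above.
transactions = [
    {"I1", "I2", "I5"}, {"I2", "I4"}, {"I2", "I3"},
    {"I1", "I2", "I4"}, {"I1", "I3"}, {"I2", "I3"},
    {"I1", "I3"}, {"I1", "I2", "I3", "I5"}, {"I1", "I2", "I3"},
]
min_support_count = 2

def support_count(itemset):
    return sum(itemset <= t for t in transactions)

# K=1: count every single item and keep those meeting min support (L1).
items = sorted({i for t in transactions for i in t})
Lk = [frozenset([i]) for i in items
      if support_count(frozenset([i])) >= min_support_count]
frequent = {s: support_count(s) for s in Lk}

k = 2
while Lk:
    # Join step: unions of itemsets in Lk-1 that form a k-element candidate.
    candidates = {a | b for a in Lk for b in Lk if len(a | b) == k}
    # Prune step (Apriori property): every (k-1)-subset must be frequent.
    candidates = {c for c in candidates
                  if all(frozenset(s) in frequent
                         for s in combinations(c, k - 1))}
    # Scan the dataset and keep candidates meeting min support (Lk).
    Lk = [c for c in candidates if support_count(c) >= min_support_count]
    frequent.update({s: support_count(s) for s in Lk})
    k += 1

for s in sorted(frequent, key=lambda s: (len(s), sorted(s))):
    print(sorted(s), frequent[s])
```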
How can we further improve the efficiency of Apriori?
Several variations of the Apriori algorithm have been proposed that aim to improve the efficiency of the original algorithm, as follows:
The hash-based technique (hashing itemsets into corresponding buckets) − A hash-based technique can be used to reduce the size of the candidate k-itemsets, Ck, for k > 1. For instance, while scanning each transaction in the database to generate the frequent 1-itemsets, L1, from the candidate 1-itemsets in C1, we can generate all of the 2-itemsets for each transaction, hash (i.e., map) them into the buckets of a hash table structure, and increase the corresponding bucket counts.
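A minimal sketch of that idea, with an illustrative bucket count of 8 and a tiny made-up transaction list: any 2-itemset whose bucket count stays below min_sup cannot be frequent, so it can be dropped from C2 before the second scan.

```python
from itertools import combinations

# Tiny illustrative transactions (assumed for the sketch).
transactions = [{"I1", "I2", "I5"}, {"I2", "I4"}, {"I2", "I3"},
                {"I1", "I2", "I4"}, {"I1", "I3"}]
min_sup = 2
NUM_BUCKETS = 8  # illustrative hash-table size

# While scanning for 1-itemset counts, also hash every 2-itemset of each
# transaction into a bucket and bump that bucket's count.
buckets = [0] * NUM_BUCKETS
for t in transactions:
    for pair in combinations(sorted(t), 2):
        buckets[hash(pair) % NUM_BUCKETS] += 1

def may_be_frequent(pair):
    # A 2-itemset can only reach min_sup if its whole bucket did; buckets
    # below min_sup prune every pair that hashed into them.
    return buckets[hash(tuple(sorted(pair))) % NUM_BUCKETS] >= min_sup

print(may_be_frequent({"I1", "I2"}))
```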
Transaction reduction − A transaction that does not contain any frequent k-itemsets cannot contain any frequent (k + 1)-itemsets. Therefore, such a transaction can be marked or removed from further consideration, because subsequent scans of the database for j-itemsets, where j > k, will not need it.
Partitioning − A partitioning technique can be used that requires just two database scans to mine the frequent itemsets. It consists of two phases. In Phase I, the algorithm subdivides the transactions of D into n non-overlapping partitions. If the minimum support threshold for transactions in D is min_sup, then the minimum support count for a partition is min_sup × the number of transactions in that partition.
For each partition, all frequent itemsets within the partition are
discovered. These are defined as local frequent itemsets. The
process employs a specific data structure that, for each itemset,
records the TIDs of the transactions including the items in the
itemset. This enables it to find all of the local frequent k-
itemsets, for k = 1, 2... in only one scan of the database.
A local frequent itemset may or may not be frequent with respect to the entire database, D. However, any itemset that is potentially frequent in D must appear as a frequent itemset in at least one of the partitions. Therefore, all local frequent itemsets are candidate itemsets with respect to D, and the set of frequent itemsets from all partitions forms the global candidate itemsets for D. In Phase II, a second scan of D is conducted in which the actual support of each candidate is counted to determine the global frequent itemsets.
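A minimal sketch of the two-phase idea follows; mine_local is a hypothetical helper standing in for any in-memory frequent-itemset miner (for instance, the level-wise routine sketched earlier), and partitioning by slicing is an assumption made for illustration.

```python
def partition_mine(transactions, n_parts, min_sup_frac, mine_local):
    # Phase I: mine each partition with a proportionally scaled support
    # count; any globally frequent itemset is locally frequent somewhere.
    candidates = set()
    parts = [transactions[i::n_parts] for i in range(n_parts)]
    for part in parts:
        local_min = max(1, int(min_sup_frac * len(part)))
        candidates |= set(mine_local(part, local_min))
    # Phase II: one full scan of D counts the actual support of each
    # candidate to decide the global frequent itemsets.
    global_min = min_sup_frac * len(transactions)
    return {c for c in candidates
            if sum(c <= t for t in transactions) >= global_min}
```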
Sampling − The fundamental idea of the sampling approach is to select a random sample S of the given data D and then search for frequent itemsets in S rather than in D. In this method, we trade some degree of accuracy for efficiency. The sample size of S is chosen so that the search for frequent itemsets in S can be completed in main memory, and therefore only one scan of the transactions in S is needed overall.
Frequent Pattern Growth Algorithm
The two primary drawbacks of the Apriori Algorithm are:
1. At each step, candidate sets have to be built.
2. To build the candidate sets, the algorithm has to
repeatedly scan the database.
These two properties inevitably make the algorithm slower. To
overcome these redundant steps, a new association-rule mining
algorithm was developed named Frequent Pattern Growth
Algorithm. It overcomes the disadvantages of the Apriori
algorithm by storing all the transactions in a Trie Data Structure.
Consider the following data:-
Transaction ID  Items
T1              {E, K, M, N, O, Y}
T2              {D, E, K, N, O, Y}
T3              {A, E, K, M}
T4              {C, K, M, U, Y}
T5              {C, E, I, K, O, O}
The above-given data is a hypothetical dataset of transactions
with each letter representing an item. The frequency of each
individual item is computed:-
Item  Frequency
A     1
C     2
D     1
E     4
I     1
K     5
M     3
N     2
O     4
U     1
Y     3

Let the minimum support be 3. A Frequent Pattern set is built which will contain all the elements whose frequency is greater than or equal to the minimum support. These elements are stored in descending order of their respective frequencies. After insertion of the relevant items, the set L looks like this:

L = {K : 5, E : 4, M : 3, O : 4, Y : 3}

Now, for each transaction, the respective Ordered-Item set is built. It is done by iterating over the Frequent Pattern set and checking whether the current item is contained in the transaction in question. If it is, the item is inserted into the Ordered-Item set for the current transaction. The following table is built for all the transactions:
Transaction ID  Items                Ordered-Item Set
T1              {E, K, M, N, O, Y}   {K, E, M, O, Y}
T2              {D, E, K, N, O, Y}   {K, E, O, Y}
T3              {A, E, K, M}         {K, E, M}
T4              {C, K, M, U, Y}      {K, M, Y}
T5              {C, E, I, K, O, O}   {K, E, O}
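Here is a minimal Python sketch of these first two FP-Growth steps. Two assumptions are made so that the output matches the tables above: each item is counted once per transaction (so the repeated O in T5 contributes a count of 3 rather than 4), and frequency ties are broken alphabetically.

```python
from collections import Counter

# Transactions from the table above (each treated as a set of items).
transactions = [
    {"E", "K", "M", "N", "O", "Y"},
    {"D", "E", "K", "N", "O", "Y"},
    {"A", "E", "K", "M"},
    {"C", "K", "M", "U", "Y"},
    {"C", "E", "I", "K", "O"},
]
min_support = 3

# Count each item once per transaction and keep items meeting min support.
freq = Counter(item for t in transactions for item in t)
L = {item: c for item, c in freq.items() if c >= min_support}

# Rewrite each transaction as an Ordered-Item set: frequent items only,
# sorted by descending frequency (alphabetical tie-break, an assumption).
ordered = [sorted((i for i in t if i in L), key=lambda i: (-L[i], i))
           for t in transactions]
for tid, t in enumerate(ordered, start=1):
    print(f"T{tid}: {t}")
```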
Now, all the Ordered-Item sets are inserted into a Trie Data
Structure.

a) Inserting the set {K, E, M, O, Y}:
Here, all the items are simply linked one after the other in their order of occurrence in the set, and the support count of each node is initialized to 1.
b) Inserting the set {K, E, O, Y}:
Up to the insertion of the elements K and E, the support count is simply increased by 1. On inserting O, we see that there is no direct link between E and O, so a new node for the item O is initialized with a support count of 1, and item E is linked to this new node. On inserting Y, we first initialize a new node for the item Y with a support count of 1 and link the new node of O to the new node of Y.
c) Inserting the set {K, E, M}:
Here the support count of each element is simply increased by 1.
d) Inserting the set {K, M, Y}:
Similar to step b), first the support count of K is increased, then new nodes for M and Y are initialized and linked accordingly.
e) Inserting the set {K, E, O}:
Here the support counts of the respective elements are simply increased. Note that the support count of the new node of item O is increased.
Now, for each item, the Conditional Pattern Base is computed, which is the set of path labels of all the paths that lead to any node of the given item in the frequent-pattern tree. Note that the items in the table below are arranged in ascending order of their frequencies.
Now for each item, the Conditional Frequent Pattern Tree is
built. It is done by taking the set of elements that is common in
all the paths in the Conditional Pattern Base of that item and
calculating its support count by summing the support counts of
all the paths in the Conditional Pattern Base.

From the Conditional Frequent Pattern tree, the Frequent Pattern rules are generated by pairing the items of the Conditional Frequent Pattern Tree set with the corresponding item, as given in the table below.

For each row, two types of association rules can be inferred; for example, for the first row, which contains the element Y, the rules K -> Y and Y -> K can be inferred. To determine the valid rule, the confidence of both rules is calculated, and the one with confidence greater than or equal to the minimum confidence value is retained.

Types of Association Rules in Data Mining

Association rule learning is a machine learning technique used for discovering interesting relationships between variables in large databases. It is designed to detect strong rules in the database based on interestingness metrics. For any given multi-item transaction, association rules aim to obtain rules that determine how or why certain items are linked. Association rules are created for finding information about general if-then patterns, using criteria based on support and confidence to define what the key relationships are. Support indicates how frequently an item appears in the data, while confidence indicates the number of times an if-then statement is found to be true.
Types of Association Rules:
There are various types of association rules in data mining:-
 Multi-relational association rules
 Generalized association rules
 Quantitative association rules
 Interval information association rules
1. Multi-relational association rules: Multi-Relation Association Rules (MRAR) are a class of association rules in which, unlike primitive, simple, and even multi-relational association rules (usually extracted from multi-relational databases), each rule element consists of one entity but several relationships. These relationships represent indirect relationships between the entities.
2. Generalized association rules: Generalized association rule
extraction is a powerful tool for getting a rough idea of
interesting patterns hidden in data. However, since patterns are
extracted at each level of abstraction, the mined rule sets may
be too large to be used effectively for decision-making.
Therefore, in order to discover valuable and interesting
knowledge, post-processing steps are often required.
Generalized association rules should have categorical (nominal
or discrete) properties on both the left and right sides of the
rule.
3. Quantitative association rules: Quantitative association rules are a special type of association rule. Unlike general association rules, where both the left and right sides of the rule should be categorical (nominal or discrete) attributes, at least one attribute (left or right) of a quantitative association rule must contain a numeric attribute.
Uses of Association Rules
Some of the uses of association rules in different fields are
given below:
 Medical Diagnosis: Association rules in medical diagnosis can be used to help doctors treat patients. As we all know, diagnosis is not an easy process, and there are many errors that can lead to unreliable end results. Using multi-relational association rules, we can determine the probability of disease occurrence associated with various factors and symptoms.
 Market Basket Analysis: It is one of the most popular
examples and uses of association rule mining. Big retailers
typically use this technique to determine the association
between items.
Multilevel Association Rule:
Association rules generated from mining data at multiple levels of abstraction are called multiple-level or multilevel association rules.
Multilevel association rules can be mined efficiently using concept hierarchies under a support-confidence framework.
Rules at a high concept level may add common-sense knowledge, while rules at a low concept level may not always be useful.
Using uniform minimum support for all levels:
 When a uniform minimum support threshold is used, the search procedure is simplified.
 The method is also simple, in that users are required to specify only a single minimum support threshold.
 The same minimum support threshold is used when mining at each level of abstraction (for example, when mining from "computer" down to "laptop computer"). With a single threshold, both "computer" and "laptop computer" may be found to be frequent, while a rarer lower-level item is not.
Need for Multilevel Association Rules:
 Sometimes at a low data level, the data does not show any significant pattern, but there is useful information hiding behind it.
 The aim is to find the hidden information in or between levels of abstraction.
Approaches to multilevel association rule mining:
1. Uniform Support (using uniform minimum support for all levels)
2. Reduced Support (using reduced minimum support at lower levels)
3. Group-based Support (using item or group based support)
Let's discuss them one by one.
1. Uniform Support –
When a uniform minimum support threshold is used, the search procedure is simplified. The method is also simple in that users are required to specify only a single minimum support threshold. An optimization can be adopted based on the knowledge that an ancestor is a superset of its descendants: the search avoids examining itemsets containing any item whose ancestors do not have minimum support. The uniform support approach, however, has some difficulties. It is unlikely that items at lower levels of abstraction will occur as frequently as those at higher levels of abstraction. If the minimum support threshold is set too high, it could miss several meaningful associations occurring at low abstraction levels. This provides the motivation for the following approach.
2. Reduced Support –
For mining multilevel associations with reduced support, there are several alternative search strategies, as follows.
 Level-by-level independent –
This is a full-breadth search, where no background knowledge of frequent itemsets is used for pruning. Each node is examined, regardless of whether its parent node is found to be frequent.
 Level-cross filtering by single item –
An item at the i-th level is examined if and only if its parent node at the (i-1)-th level is frequent. In other words, we investigate a more specific association from a more general one. If a node is frequent, its children will be examined; otherwise, its descendants are pruned from the search.
 Level-cross filtering by k-itemset –
A k-itemset at the i-th level is examined if and only if its corresponding parent k-itemset at the (i-1)-th level is frequent.
3. Group-based Support –
The group-wise threshold values for support and confidence are input by the user or an expert. The group is selected based on product price or item set, because the expert often has insight into which groups are more important than others.
Example –
For example, experts may be interested in the purchase patterns of laptops or clothes in the electronics and non-electronics categories. Therefore, a low support threshold is set for these groups to give attention to these items' purchase patterns.
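To illustrate the reduced-support idea, here is a toy Python sketch of level-cross filtering by single item; the concept hierarchy, support values, and per-level thresholds are all hypothetical numbers chosen for illustration.

```python
# Hypothetical concept hierarchy and item supports (illustrative only).
hierarchy = {"computer": ["laptop computer", "desktop computer"]}
support = {"computer": 0.10, "laptop computer": 0.06, "desktop computer": 0.02}

# Reduced support: a lower threshold at the lower, more specific level.
min_sup = {1: 0.05, 2: 0.03}

def examine(item, level):
    # Level-cross filtering by single item: a node is examined only when
    # reached from a frequent parent; infrequent nodes prune descendants.
    if support.get(item, 0.0) < min_sup[level]:
        return
    print(f"level {level}: {item} is frequent ({support[item]:.0%})")
    for child in hierarchy.get(item, []):
        examine(child, level + 1)

examine("computer", 1)  # prints computer and laptop computer; desktop pruned
```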
