
Data Mining

Classification: Alternative Techniques

Lecture Notes for Chapter 5

Introduction to Data Mining


by
Tan, Steinbach, Kumar

© Tan, Steinbach, Kumar, Introduction to Data Mining, 4/18/2004


Rule-Based Classifier

● Classify records by using a collection of “if…then…” rules
● Rule: (Condition) → y
– where
◆ Condition is a conjunction of attribute tests
◆ y is the class label
– LHS: rule antecedent or condition
– RHS: rule consequent
– Examples of classification rules:
◆ (Blood Type=Warm) ∧ (Lay Eggs=Yes) → Birds
◆ (Taxable Income < 50K) ∧ (Refund=Yes) → Evade=No
Rule-based Classifier (Example)

R1: (Give Birth = no) ∧ (Can Fly = yes) → Birds
R2: (Give Birth = no) ∧ (Live in Water = yes) → Fishes
R3: (Give Birth = yes) ∧ (Blood Type = warm) → Mammals
R4: (Give Birth = no) ∧ (Can Fly = no) → Reptiles
R5: (Live in Water = sometimes) → Amphibians
Application of Rule-Based Classifier

● A rule r covers an instance x if the attributes of the instance satisfy the condition of the rule
R1: (Give Birth = no) ∧ (Can Fly = yes) → Birds
R2: (Give Birth = no) ∧ (Live in Water = yes) → Fishes
R3: (Give Birth = yes) ∧ (Blood Type = warm) → Mammals
R4: (Give Birth = no) ∧ (Can Fly = no) → Reptiles
R5: (Live in Water = sometimes) → Amphibians

The rule R1 covers a hawk => Bird
The rule R3 covers the grizzly bear => Mammal


Rule Coverage and Accuracy

● Coverage of a rule:
– Fraction of records that satisfy the antecedent of a rule
● Accuracy of a rule:
– Fraction of records that satisfy both the antecedent and consequent of a rule

Example: (Status=Single) → No
Coverage = 40%, Accuracy = 50%
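The two measures are easy to state in code. Below is a minimal Python sketch; the record list is made up for illustration, not the table from the original slide:

# Coverage and accuracy of the rule (Status=Single) -> No,
# computed over a small, hypothetical list of records.
records = [
    {"Status": "Single", "Class": "No"},
    {"Status": "Single", "Class": "Yes"},
    {"Status": "Married", "Class": "No"},
    {"Status": "Single", "Class": "No"},
    {"Status": "Divorced", "Class": "Yes"},
]

def coverage_and_accuracy(records, antecedent, consequent):
    covered = [r for r in records if all(r[a] == v for a, v in antecedent.items())]
    coverage = len(covered) / len(records)  # fraction satisfying the antecedent
    correct = [r for r in covered if r["Class"] == consequent]
    accuracy = len(correct) / len(covered) if covered else 0.0  # fraction also satisfying the consequent
    return coverage, accuracy

print(coverage_and_accuracy(records, {"Status": "Single"}, "No"))  # (0.6, 0.666...)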
How does Rule-based Classifier Work?

R1: (Give Birth = no) ∧ (Can Fly = yes) → Birds
R2: (Give Birth = no) ∧ (Live in Water = yes) → Fishes
R3: (Give Birth = yes) ∧ (Blood Type = warm) → Mammals
R4: (Give Birth = no) ∧ (Can Fly = no) → Reptiles
R5: (Live in Water = sometimes) → Amphibians

A lemur triggers rule R3, so it is classified as a mammal


A turtle triggers both R4 and R5
A dogfish shark triggers none of the rules



Characteristics of Rule-Based Classifier

● Mutually exclusive rules


– Classifier contains mutually exclusive rules if
the rules are independent of each other
– Every record is covered by at most one rule

● Exhaustive rules
– Classifier has exhaustive coverage if it
accounts for every possible combination of
attribute values
– Each record is covered by at least one rule
From Decision Trees To Rules

Rules are mutually exclusive and exhaustive
Rule set contains as much information as the tree



Rules Can Be Simplified

Initial Rule: (Refund=No) ∧ (Status=Married) → No


Simplified Rule: (Status=Married) → No
Effect of Rule Simplification

● Rules are no longer mutually exclusive


– A record may trigger more than one rule
– Solution?
◆ Ordered rule set
◆ Unordered rule set – use voting schemes

● Rules are no longer exhaustive


– A record may not trigger any rules
– Solution?
◆ Use a default class
Ordered Rule Set

● Rules are rank ordered according to their priority
– An ordered rule set is known as a decision list
● When a test record is presented to the classifier
– It is assigned to the class label of the highest ranked rule it has triggered
– If none of the rules fired, it is assigned to the default class

R1: (Give Birth = no) ∧ (Can Fly = yes) → Birds
R2: (Give Birth = no) ∧ (Live in Water = yes) → Fishes
R3: (Give Birth = yes) ∧ (Blood Type = warm) → Mammals
R4: (Give Birth = no) ∧ (Can Fly = no) → Reptiles
R5: (Live in Water = sometimes) → Amphibians

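A minimal Python sketch of applying a decision list. Encoding rules as (condition, label) pairs is an illustrative representation, not the book's:

# Apply an ordered rule set: the record gets the class of the
# highest-ranked rule it triggers, else the default class.
rules = [
    (lambda r: r["Give Birth"] == "no" and r["Can Fly"] == "yes", "Birds"),        # R1
    (lambda r: r["Give Birth"] == "no" and r["Live in Water"] == "yes", "Fishes"), # R2
    (lambda r: r["Give Birth"] == "yes" and r["Blood Type"] == "warm", "Mammals"), # R3
    (lambda r: r["Give Birth"] == "no" and r["Can Fly"] == "no", "Reptiles"),      # R4
    (lambda r: r["Live in Water"] == "sometimes", "Amphibians"),                   # R5
]

def classify(record, rules, default="Unknown"):
    for condition, label in rules:    # rules are scanned in rank order
        if condition(record):
            return label              # first triggered rule wins
    return default                    # no rule fired -> default class

turtle = {"Give Birth": "no", "Can Fly": "no", "Live in Water": "sometimes", "Blood Type": "cold"}
print(classify(turtle, rules))  # R4 outranks R5 -> "Reptiles"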


Rule Ordering Schemes

● Rule-based ordering
– Individual rules are ranked based on their quality
● Class-based ordering
– Rules that belong to the same class appear together



Building Classification Rules

● Direct Method:
◆ Extract rules directly from data
◆ e.g.: RIPPER, CN2, Holte’s 1R

● Indirect Method:
◆ Extract rules from other classification models (e.g., decision trees, neural networks, etc.)
◆ e.g.: C4.5rules



Direct Method: Sequential Covering

1. Start from an empty rule
2. Grow a rule using the Learn-One-Rule function
3. Remove training records covered by the rule
4. Repeat Steps (2) and (3) until the stopping criterion is met

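A skeleton of this loop in Python. learn_one_rule, the rule object, and its covers/num_covered methods are placeholders for the machinery described on the following slides:

# Sequential covering skeleton: repeatedly learn one rule, then remove
# the training records it covers.
def sequential_covering(records, target_class, learn_one_rule, min_coverage=1):
    rule_set = []
    remaining = list(records)
    while True:
        rule = learn_one_rule(remaining, target_class)   # step 2: grow one rule
        if rule is None or rule.num_covered(remaining) < min_coverage:
            break                                        # stopping criterion met
        remaining = [r for r in remaining if not rule.covers(r)]  # step 3
        rule_set.append(rule)
    return rule_set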


Example of Sequential Covering



Example of Sequential Covering…



Aspects of Sequential Covering

● Rule Growing

● Instance Elimination

● Rule Evaluation

● Stopping Criterion

● Rule Pruning



Rule Growing

● Two common strategies: general-to-specific and specific-to-general



Rule Growing (Examples)

● CN2 Algorithm:
– Start from an empty conjunct: {}
– Add conjuncts that minimize the entropy measure: {A}, {A,B}, …
– Determine the rule consequent by taking the majority class of instances covered by the rule
● RIPPER Algorithm:
– Start from an empty rule: {} => class
– Add conjuncts that maximize FOIL’s information gain measure:
◆ R0: {} => class (initial rule)
◆ R1: {A} => class (rule after adding conjunct)
◆ Gain(R0, R1) = t [ log2 (p1/(p1+n1)) – log2 (p0/(p0+n0)) ]
◆ where t: number of positive instances covered by both R0 and R1
p0: number of positive instances covered by R0
n0: number of negative instances covered by R0
p1: number of positive instances covered by R1
n1: number of negative instances covered by R1
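A direct Python transcription of this gain measure (assuming base-2 logarithms, as is conventional for FOIL). Here t = p1, since the specialized rule R1 can only cover positives that the more general R0 already covered:

import math

def foil_gain(p0, n0, p1, n1):
    # FOIL's information gain for extending rule R0 to R1, as defined above.
    # p0, n0: positives/negatives covered by R0; p1, n1: by R1.
    t = p1
    return t * (math.log2(p1 / (p1 + n1)) - math.log2(p0 / (p0 + n0)))

# Adding a conjunct that keeps 60 of 100 positives but drops 90 of 100 negatives:
print(foil_gain(p0=100, n0=100, p1=60, n1=10))  # ~46.7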
Instance Elimination

● Why do we need to
eliminate instances?
– Otherwise, the next rule is
identical to previous rule
● Why do we remove
positive instances?
– Ensure that the next rule is
different
● Why do we remove
negative instances?
– Prevent underestimating
accuracy of rule
– Compare rules R2 and R3
in the diagram



Rule Evaluation

● Metrics:
– Accuracy = nc / n
– Laplace = (nc + 1) / (n + k)
– M-estimate = (nc + k p) / (n + k)
– where
n : number of instances covered by rule
nc : number of instances covered by rule that are correctly classified
k : number of classes
p : prior probability of the rule’s class

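The three metrics written out in Python, with a small numeric check showing how the smoothed measures shrink toward the prior:

# Rule-quality measures from the slide above.
def accuracy(nc, n):
    return nc / n

def laplace(nc, n, k):
    return (nc + 1) / (n + k)

def m_estimate(nc, n, k, p):
    return (nc + k * p) / (n + k)

# A rule covering n=10 instances, nc=8 correct, k=2 classes, prior p=0.5:
print(accuracy(8, 10), laplace(8, 10, 2), m_estimate(8, 10, 2, 0.5))
# -> 0.8 0.75 0.75  (the smoothed measures pull the estimate toward the prior)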


Stopping Criterion and Rule Pruning

● Stopping criterion
– Compute the gain
– If gain is not significant, discard the new rule

● Rule Pruning
– Similar to post-pruning of decision trees
– Reduced Error Pruning:
◆ Remove one of the conjuncts in the rule
◆ Compare error rate on validation set before and
after pruning
◆ If error improves, prune the conjunct
Summary of Direct Method

● Grow a single rule

● Remove instances covered by the rule

● Prune the rule (if necessary)

● Add rule to Current Rule Set

● Repeat



Direct Method: RIPPER

● For a 2-class problem, choose one of the classes as the positive class, and the other as the negative class
– Learn rules for the positive class
– Negative class will be the default class
● For multi-class problem
– Order the classes according to increasing class
prevalence (fraction of instances that belong to a
particular class)
– Learn the rule set for smallest class first, treat the
rest as negative class
– Repeat with next smallest class as positive class



Direct Method: RIPPER

● Growing a rule:
– Start from empty rule
– Add conjuncts as long as they improve FOIL’s
information gain
– Stop when rule no longer covers negative examples
– Prune the rule immediately using incremental
reduced error pruning
– Measure for pruning: v = (p-n)/(p+n)
◆ p: number of positive examples covered by the rule in
the validation set
◆ n: number of negative examples covered by the rule in
the validation set
– Pruning method: delete any final sequence of
conditions that maximizes v
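A rough Python sketch of this pruning step, assuming rules are dicts with a list of (attribute, value) conjuncts. Greedily dropping trailing conjuncts while v does not decrease is one simple approximation of "delete any final sequence of conditions that maximizes v":

# Incremental reduced error pruning with v = (p - n) / (p + n),
# measured on a validation set.
def rule_covers(rule, record):
    # a rule covers a record if every (attribute, value) conjunct matches
    return all(record.get(a) == v for a, v in rule["conjuncts"])

def prune_metric(rule, validation):
    covered = [r for r in validation if rule_covers(rule, r)]
    p = sum(1 for r in covered if r["Class"] == rule["class"])
    n = len(covered) - p
    return (p - n) / (p + n) if covered else float("-inf")

def prune_rule(rule, validation):
    best = dict(rule)
    while len(best["conjuncts"]) > 1:
        candidate = {"conjuncts": best["conjuncts"][:-1], "class": best["class"]}
        if prune_metric(candidate, validation) >= prune_metric(best, validation):
            best = candidate   # dropping the last conjunct did not hurt v
        else:
            break
    return best

validation = [{"A": 1, "B": 2, "Class": "yes"}, {"A": 1, "B": 3, "Class": "no"}]
rule = {"conjuncts": [("A", 1), ("B", 2)], "class": "yes"}
print(prune_rule(rule, validation))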
Direct Method: RIPPER

● Building a Rule Set:


– Use sequential covering algorithm
◆ Finds the best rule that covers the current set of
positive examples
◆ Eliminate both positive and negative examples
covered by the rule
– Each time a rule is added to the rule set,
compute the new description length
◆ stop adding new rules when the new description
length is d bits longer than the smallest description
length obtained so far



Direct Method: RIPPER

● Optimize the rule set:


– For each rule r in the rule set R
◆ Consider 2 alternative rules:
– Replacement rule (r*): grow new rule from scratch
– Revised rule(r’): add conjuncts to extend the rule r
◆ Compare the rule set for r against the rule sets for r* and r’
◆ Choose the rule set that minimizes the description length (MDL principle)
– Repeat rule generation and rule optimization
for the remaining positive examples



Indirect Methods



Indirect Method: C4.5rules

● Extract rules from an unpruned decision tree


● For each rule, r: A → y,
– consider an alternative rule r’: A’ → y where
A’ is obtained by removing one of the
conjuncts in A
– Compare the pessimistic error rate for r against all r’
– Prune if one of the r’ has a lower pessimistic error rate
– Repeat until we can no longer improve
generalization error



Indirect Method: C4.5rules

● Instead of ordering the rules, order subsets of rules (class ordering)
– Each subset is a collection of rules with the
same rule consequent (class)
– Compute description length of each subset
◆ Description length = L(error) + g L(model)
◆ g is a parameter that takes into account the
presence of redundant attributes in a rule set
(default value = 0.5)



Example



C4.5 versus C4.5rules versus RIPPER

C4.5rules:
(Give Birth=No, Can Fly=Yes) → Birds
(Give Birth=No, Live in Water=Yes) → Fishes
(Give Birth=Yes) → Mammals
(Give Birth=No, Can Fly=No, Live in Water=No) → Reptiles
( ) → Amphibians

RIPPER:
(Live in Water=Yes) → Fishes
(Have Legs=No) → Reptiles
(Give Birth=No, Can Fly=No, Live In Water=No) → Reptiles
(Can Fly=Yes, Give Birth=No) → Birds
() → Mammals



C4.5 versus C4.5rules versus RIPPER

C4.5 and C4.5rules:

RIPPER:



Advantages of Rule-Based Classifiers

● As highly expressive as decision trees


● Easy to interpret
● Easy to generate
● Can classify new instances rapidly
● Performance comparable to decision trees



Instance-Based Classifiers

• Store the training records
• Use training records to predict the class label of unseen cases



Instance Based Classifiers

● Examples:
– Rote-learner
◆ Memorizes entire training data and performs
classification only if attributes of record match one of
the training examples exactly

– Nearest neighbor
◆ Uses k “closest” points (nearest neighbors) for
performing classification



Nearest Neighbor Classifiers

● Basic idea:
– If it walks like a duck, quacks like a duck, then
it’s probably a duck

Compute the distance between the test record and the training records, then choose the k “nearest” records.



Nearest-Neighbor Classifiers

● Requires three things


– The set of stored records
– Distance Metric to compute
distance between records
– The value of k, the number
of nearest neighbors to
retrieve

● To classify an unknown record:
– Compute distance to other training records
– Identify k nearest neighbors
– Use class labels of nearest neighbors to determine the class label of unknown record (e.g., by taking majority vote)
Definition of Nearest Neighbor

The k-nearest neighbors of a record x are the data points that have the k smallest distances to x.
Nearest Neighbor Classification

● Compute distance between two points:
– Euclidean distance: d(p, q) = √( Σi (pi – qi)² )

● Determine the class from the nearest neighbor list
– take the majority vote of class labels among the k-nearest neighbors
– Weigh the vote according to distance
◆ weight factor, w = 1/d²
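A compact Python sketch of the whole procedure, with optional 1/d² distance weighting; the training points and k are illustrative:

import math
from collections import Counter

def euclidean(p, q):
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def knn_classify(test, training, k=3, weighted=False):
    # training: list of (point, label); keep the k nearest records
    neighbors = sorted(training, key=lambda rec: euclidean(test, rec[0]))[:k]
    votes = Counter()
    for point, label in neighbors:
        d = euclidean(test, point)
        votes[label] += 1.0 / d**2 if weighted and d > 0 else 1.0  # w = 1/d^2
    return votes.most_common(1)[0][0]

training = [((1.0, 1.0), "+"), ((1.2, 0.8), "+"), ((4.0, 4.0), "-"), ((4.2, 3.9), "-")]
print(knn_classify((1.5, 1.0), training, k=3))  # majority of 3 nearest -> "+"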
Nearest Neighbor Classification…

● Choosing the value of k:


– If k is too small, sensitive to noise points
– If k is too large, neighborhood may include points
from other classes



Nearest Neighbor Classification…

● Scaling issues
– Attributes may have to be scaled to prevent
distance measures from being dominated by
one of the attributes
– Example:
◆ height of a person may vary from 1.5m to 1.8m
◆ weight of a person may vary from 90lb to 300lb
◆ income of a person may vary from $10K to $1M

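One common fix is to rescale each attribute before computing distances. A minimal min-max scaling sketch, using the slide's ranges:

# Min-max scaling to [0, 1] so that income (range ~$990K) does not
# swamp height (range ~0.3m) in the distance computation.
def min_max_scale(column):
    lo, hi = min(column), max(column)
    return [(x - lo) / (hi - lo) for x in column]

heights = [1.5, 1.6, 1.8]
incomes = [10_000, 500_000, 1_000_000]
print(min_max_scale(heights))  # [0.0, 0.333..., 1.0]
print(min_max_scale(incomes))  # [0.0, 0.494..., 1.0]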


Nearest neighbor Classification…

● k-NN classifiers are lazy learners
– They do not build models explicitly
– Unlike eager learners such as decision tree induction and rule-based systems
– Classifying unknown records is relatively expensive



Bayes Classifier

● A probabilistic framework for solving classification problems

● Conditional Probability:
P(C|A) = P(A, C) / P(A)
P(A|C) = P(A, C) / P(C)

● Bayes theorem:
P(C|A) = P(A|C) P(C) / P(A)


Example of Bayes Theorem

● Given:
– A doctor knows that meningitis causes stiff neck 50% of the
time
– Prior probability of any patient having meningitis is 1/50,000
– Prior probability of any patient having stiff neck is 1/20

● If a patient has a stiff neck, what’s the probability he/she has meningitis?

P(M|S) = P(S|M) P(M) / P(S) = (0.5 × 1/50000) / (1/20) = 0.0002


Bayesian Classifiers

● Consider each attribute and class label as random variables

● Given a record with attributes (A1, A2, …, An)
– Goal is to predict class C
– Specifically, we want to find the value of C that maximizes P(C | A1, A2, …, An)

● Can we estimate P(C | A1, A2, …, An) directly from data?
Bayesian Classifiers

● Approach:
– Compute the posterior probability P(C | A1, A2, …, An) for all values of C using the Bayes theorem:
P(C | A1, A2, …, An) = P(A1, A2, …, An | C) P(C) / P(A1, A2, …, An)
– Choose the value of C that maximizes P(C | A1, A2, …, An)
– Equivalent to choosing the value of C that maximizes P(A1, A2, …, An | C) P(C)

● How to estimate P(A1, A2, …, An | C)?
Naïve Bayes Classifier

● Assume independence among attributes Ai when class is given:
– P(A1, A2, …, An | Cj) = P(A1 | Cj) P(A2 | Cj) … P(An | Cj)
– Can estimate P(Ai | Cj) for all Ai and Cj
– New point is classified to Cj if P(Cj) Π P(Ai | Cj) is maximal


How to Estimate Probabilities from Data?

● Class: P(C) = Nc / N
– e.g., P(No) = 7/10, P(Yes) = 3/10

● For discrete attributes: P(Ai | Ck) = |Aik| / Nc
– where |Aik| is the number of instances having attribute value Ai and belonging to class Ck
– Examples:
P(Status=Married|No) = 4/7
P(Refund=Yes|Yes) = 0


How to Estimate Probabilities from Data?

● For continuous attributes:
– Discretize the range into bins
◆ one ordinal attribute per bin
◆ violates independence assumption
– Two-way split: (A < v) or (A > v)
◆ choose only one of the two splits as new attribute
– Probability density estimation:
◆ Assume attribute follows a normal distribution
◆ Use data to estimate parameters of distribution (e.g., mean and standard deviation)
◆ Once probability distribution is known, can use it to estimate the conditional probability P(Ai|c)
How to Estimate Probabilities from Data?

● Normal distribution:
P(Ai | cj) = (1 / √(2π σij²)) exp( –(Ai – μij)² / (2σij²) )
– One for each (Ai, ci) pair

● For (Income, Class=No):
– sample mean = 110
– sample variance = 2975
– P(Income=120 | No) = (1 / √(2π × 2975)) exp( –(120 – 110)² / (2 × 2975) ) = 0.0072
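A quick numeric check of the slide's figure: plugging Income=120 into a normal density with mean 110 and variance 2975 gives roughly 0.0072, the value used on the next slide:

import math

def gaussian_density(x, mean, variance):
    # normal density used to estimate P(Ai | c) for a continuous attribute
    return math.exp(-((x - mean) ** 2) / (2 * variance)) / math.sqrt(2 * math.pi * variance)

print(gaussian_density(120, 110, 2975))  # ~0.0072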


Example of Naïve Bayes Classifier
Given a test record: X = (Refund=No, Married, Income=120K)
● P(X|Class=No) = P(Refund=No|Class=No)
× P(Married| Class=No)
× P(Income=120K| Class=No)
= 4/7 × 4/7 × 0.0072 = 0.0024

● P(X|Class=Yes) = P(Refund=No|Class=Yes)
× P(Married|Class=Yes)
× P(Income=120K|Class=Yes)
= 1 × 0 × 1.2 × 10⁻⁹ = 0

Since P(X|No)P(No) > P(X|Yes)P(Yes),
P(No|X) > P(Yes|X) => Class = No
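The same computation in Python, taking the conditional probabilities straight from the slide:

# Naive Bayes scoring for X = (Refund=No, Married, Income=120K).
p_no, p_yes = 7 / 10, 3 / 10

p_x_given_no = (4 / 7) * (4 / 7) * 0.0072   # Refund=No, Married, Income=120K under No
p_x_given_yes = 1.0 * 0.0 * 1.2e-9          # same attributes under Yes

score_no = p_x_given_no * p_no
score_yes = p_x_given_yes * p_yes
print(score_no, score_yes)                  # ~0.00165 vs 0.0
print("Class =", "No" if score_no > score_yes else "Yes")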
Naïve Bayes Classifier

● If one of the conditional probabilities is zero, then the entire expression becomes zero
● Probability estimation:
– Original: P(Ai | C) = Nic / Nc
– Laplace: P(Ai | C) = (Nic + 1) / (Nc + c)
– m-estimate: P(Ai | C) = (Nic + m p) / (Nc + m)

c: number of classes
p: prior probability
m: parameter


Example of Naïve Bayes Classifier

A: attributes
M: mammals
N: non-mammals

P(A|M)P(M) > P(A|N)P(N)


=> Mammals



Naïve Bayes (Summary)

● Robust to isolated noise points

● Handle missing values by ignoring the instance


during probability estimate calculations

● Robust to irrelevant attributes

● Independence assumption may not hold for


some attributes
– Use other techniques such as Bayesian Belief
Networks (BBN)
Artificial Neural Networks (ANN)

Output Y is 1 if at least two of the three inputs are equal to 1.



Artificial Neural Networks (ANN)



Artificial Neural Networks (ANN)

● Model is an assembly of inter-connected nodes and weighted links

● Output node sums up each of its input values according to the weights of its links

● Compare the output value against some threshold t

Perceptron Model: Y = I( Σi wi Xi – t > 0 )
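A sketch of this perceptron in Python. The weights (0.3 each) and threshold (0.4) are one choice that realizes the "at least two of three inputs equal to 1" function from the earlier ANN slide; they are illustrative, not the only solution:

# A perceptron computing Y = I(sum_i w_i * x_i - t > 0).
def perceptron(x, w, t):
    s = sum(wi * xi for wi, xi in zip(w, x))  # weighted sum of inputs
    return 1 if s - t > 0 else 0              # compare against threshold t

w, t = [0.3, 0.3, 0.3], 0.4
for x in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]:
    print(x, "->", perceptron(x, w, t))       # 0, 0, 1, 1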


General Structure of ANN

Training an ANN means learning the weights of the neurons



Algorithm for learning ANN

● Initialize the weights (w0, w1, …, wk)

● Adjust the weights in such a way that the output of the ANN is consistent with the class labels of the training examples
– Objective function: E = Σi [ Yi – f(wi, Xi) ]²
– Find the weights wi that minimize the above objective function
◆ e.g., backpropagation algorithm (see lecture notes)

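A toy illustration of minimizing this squared-error objective by gradient descent for a single linear unit f(w, x) = w·x; backpropagation extends the same update to multi-layer networks. The data and learning rate are made up:

# One gradient-descent step on E = sum_i (y_i - f(w, x_i))^2.
def gradient_step(w, xs, ys, lr=0.01):
    grad = [0.0] * len(w)
    for x, y in zip(xs, ys):
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = y - pred
        for j, xj in enumerate(x):
            grad[j] += -2.0 * err * xj        # dE/dw_j for this example
    return [wj - lr * gj for wj, gj in zip(w, grad)]

xs, ys = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)], [0.0, 0.0, 1.0]
w = [0.0, 0.0]
for _ in range(200):
    w = gradient_step(w, xs, ys)
print(w)  # weights move toward the least-squares solution (~[1/3, 1/3])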


Ensemble Methods

● Construct a set of classifiers from the training data

● Predict class label of previously unseen records by aggregating predictions made by multiple classifiers



General Idea



Why does it work?

● Suppose there are 25 base classifiers
– Each classifier has error rate ε = 0.35
– Assume the classifiers are independent
– Probability that the ensemble classifier makes a wrong prediction (13 or more of the 25 base classifiers err):
Σ (i=13 to 25) C(25, i) ε^i (1 – ε)^(25–i) ≈ 0.06
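This binomial tail sum is easy to check numerically:

from math import comb

# Probability that a majority of 25 independent base classifiers
# (each with error rate 0.35) are wrong, i.e., the ensemble errs.
eps, T = 0.35, 25
ensemble_error = sum(comb(T, i) * eps**i * (1 - eps)**(T - i)
                     for i in range(13, T + 1))   # 13 = majority of 25
print(ensemble_error)  # ~0.06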


Examples of Ensemble Methods

● How to generate an ensemble of classifiers?


– Bagging

– Boosting



Bagging

● Sampling with replacement

● Build classifier on each bootstrap sample

● Each record has probability 1 – (1 – 1/n)^n of being selected in a bootstrap sample of size n (≈ 0.632 for large n)
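A minimal bagging sketch; train is a placeholder for any base learner. The final lines check the ≈0.632 inclusion probability empirically:

import random

# Bootstrap sampling: draw n records with replacement.
def bootstrap_sample(records, rng):
    n = len(records)
    return [records[rng.randrange(n)] for _ in range(n)]

def bagging(records, train, num_classifiers=10, seed=0):
    rng = random.Random(seed)
    return [train(bootstrap_sample(records, rng)) for _ in range(num_classifiers)]

# Fraction of distinct records appearing in one bootstrap sample:
rng = random.Random(0)
data = list(range(1000))
sample = bootstrap_sample(data, rng)
print(len(set(sample)) / len(data))  # ~0.63, matching 1 - (1 - 1/n)^n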


Boosting

● An iterative procedure to adaptively change the distribution of the training data by focusing more on previously misclassified records
– Initially, all N records are assigned equal weights
– Unlike bagging, weights may change at the end of each boosting round


Boosting

● Records that are wrongly classified will have their weights increased
● Records that are classified correctly will have their weights decreased

• Example 4 is hard to classify


• Its weight is increased, therefore it is more
likely to be chosen again in subsequent rounds



Example: AdaBoost

● Base classifiers: C1, C2, …, CT

● Error rate of classifier Ci: εi = (1/N) Σj wj δ( Ci(xj) ≠ yj )

● Importance of a classifier: αi = (1/2) ln( (1 – εi) / εi )


Example: AdaBoost

● Weight update:
wj(i+1) = (wj(i) / Zi) × exp(–αi) if Ci(xj) = yj, and (wj(i) / Zi) × exp(αi) if Ci(xj) ≠ yj, where Zi is a normalization factor

● If any intermediate rounds produce an error rate higher than 50%, the weights are reverted back to 1/n and the resampling procedure is repeated

● Classification: C*(x) = arg max over y of Σj αj δ( Cj(x) = y )
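A sketch of one round of these updates in Python, with ±1 labels; the four-record example is made up:

import math

# One AdaBoost round: weighted error of classifier C_i, its importance
# alpha_i, and the updated, renormalized record weights.
def adaboost_round(weights, predictions, labels):
    eps = sum(w for w, p, y in zip(weights, predictions, labels) if p != y)
    alpha = 0.5 * math.log((1 - eps) / eps)
    new_w = [w * math.exp(-alpha if p == y else alpha)
             for w, p, y in zip(weights, predictions, labels)]
    z = sum(new_w)                        # normalization factor Z_i
    return alpha, [w / z for w in new_w]

weights = [0.25] * 4
alpha, weights = adaboost_round(weights, [1, 1, -1, -1], [1, 1, -1, 1])
print(alpha, weights)  # the misclassified record's weight grows to 0.5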


Illustrating AdaBoost

Initial weights for each data point
Data points for training


Illustrating AdaBoost



Other issues

● Class Imbalance problem


● Multiclass problem

