Printed-Prof - Sachin Guta-ASM - Lecture - 9 To 16 Combined
Lecture 9
Cluster Analysis- Part 1
BITS Pilani By Sachin Gupta
Pilani|Dubai|Goa|Hyderabad
Cluster Analysis
Impact of Multicollinearity
Excel
SPSS
Path Analysis
• Specification
• Identification
• Estimation
• Evaluation
• Re-specification
• Interpretation
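As a small illustration of the estimation and interpretation steps (using simulated, hypothetical data, not results from these slides), a standardized path coefficient β can be obtained from the unstandardized slope b via β = b·(sd_x / sd_y), or equivalently by regressing the z-scored variables:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data for one path: mastery -> interest
mastery = rng.normal(size=500)
interest = 0.7 * mastery + rng.normal(size=500)

# Unstandardized slope b from a simple OLS fit
b, intercept = np.polyfit(mastery, interest, 1)

# Standardized path coefficient: beta = b * (sd_x / sd_y)
beta = b * mastery.std(ddof=1) / interest.std(ddof=1)

# Equivalently, regress the z-scored variables directly
zx = (mastery - mastery.mean()) / mastery.std(ddof=1)
zy = (interest - interest.mean()) / interest.std(ddof=1)
beta_z, _ = np.polyfit(zx, zy, 1)

print(round(beta, 3), round(beta_z, 3))  # the two betas agree
```

The second computation shows why β is "standardized": it is simply the slope after both variables are rescaled to unit variance.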
[Path diagram: each arrow is labeled with β, the standardized path coefficient.]
NOTE: It is worth mentioning that not
all recommendations necessarily
make sense!
Mastery goals are a positive and significant predictor of interest (b = .770, s.e. = .085, p < .001; β = .607).
Mastery goals are a negative and significant predictor of anxiety (b = −.399, s.e. = .094, p < .001; β = −.337).
The covariance between mastery goals and performance goals is -6.2537 (p=.008). [The
correlation is -.229.]
The covariance between the disturbances for interest and anxiety is .0185 (p = .920). [The correlation between the disturbances is .008.]
What is Structural Equation Modeling?
Structural equation modeling (SEM) is a family of statistical models
that seeks to explain the relationships among multiple variables.
• Specifying relationships
• Establishing causation
Interpretation of Coefficients in Various Types of Regression
Y = b0 + b1 x (level-level): a one-unit increase in x changes Y by b1 units.
Y = b0 + b1 * log x (level-log): a 1% increase in x changes Y by roughly b1/100 units.
Log y = b0 + b1 x (log-level): a one-unit increase in x changes Y by roughly 100·b1 percent.
Log y = b0 + b1 * log x (log-log): a 1% increase in x changes Y by roughly b1 percent (an elasticity).
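A quick numerical check of the log-log case (with hypothetical simulated data; the true elasticity of 0.8 is an assumption of the example, not a value from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)

# Generate data with a known elasticity of 0.8:
# log y = 2 + 0.8 * log x + noise
x = rng.uniform(1, 100, size=2000)
y = np.exp(2.0 + 0.8 * np.log(x) + rng.normal(scale=0.05, size=x.size))

# Fitting the log-log form recovers b1 as the elasticity
b1, b0 = np.polyfit(np.log(x), np.log(y), 1)
print(round(b1, 2))  # close to 0.8: a 1% increase in x -> ~0.8% increase in y
```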
Why STATA?
Neural Network
10 billion neurons
60 trillion connections
• Nonlinearity
• Input/output mapping: continuous learning, as in the student-teacher case
• Adaptivity: free parameters
• Evidential response: decisions come with a measure of confidence
• Fault tolerance: what if some neurons are not working?
• VLSI implementability; neurobiological terminology
Hidden Layers
• Number of hidden layers
• Number of units in each hidden layer
Activation Function
The activation function "links" the weighted sums of units in a layer to the values of units in the succeeding layer.

Identity. This function has the form γ(c) = c. It takes real-valued arguments and returns them unchanged.

Sigmoid. This function has the form γ(c) = 1/(1 + e^(−c)). It takes real-valued arguments and transforms them to the range (0, 1).

Hyperbolic tangent. This function has the form γ(c) = tanh(c) = (e^c − e^(−c))/(e^c + e^(−c)). It takes real-valued arguments and transforms them to the range (−1, 1).

Softmax. This function has the form γ(c_k) = exp(c_k)/Σ_j exp(c_j). It takes a vector of real-valued arguments and transforms it to a vector whose elements fall in the range (0, 1) and sum to 1. Softmax is available only if all dependent variables are categorical.
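The activation functions above are straightforward to implement; a sketch in Python/NumPy (the max-shift inside softmax is a standard numerical-stability trick, not something from the slides):

```python
import numpy as np

def identity(c):
    # gamma(c) = c: returns real-valued arguments unchanged
    return c

def sigmoid(c):
    # gamma(c) = 1 / (1 + e^(-c)), maps reals into (0, 1)
    return 1.0 / (1.0 + np.exp(-c))

def tanh(c):
    # gamma(c) = (e^c - e^(-c)) / (e^c + e^(-c)), maps reals into (-1, 1)
    return np.tanh(c)

def softmax(c):
    # gamma(c_k) = exp(c_k) / sum_j exp(c_j): outputs in (0, 1), summing to 1
    e = np.exp(c - np.max(c))  # subtract the max to avoid overflow
    return e / e.sum()

v = np.array([-2.0, 0.0, 3.0])
print(sigmoid(0.0))      # 0.5
print(softmax(v).sum())  # 1.0
```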
Gradient descent.
This method must be used with online or mini-batch
training; it can also be used with batch training.
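A minimal sketch of mini-batch gradient descent, fitting a line by squared-error loss (the data, learning rate, and batch size are all hypothetical choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: y = 3x + 1 plus a little noise
X = rng.normal(size=1000)
y = 3.0 * X + 1.0 + rng.normal(scale=0.1, size=X.size)

w, b = 0.0, 0.0        # parameters to learn
lr, batch = 0.1, 32    # learning rate and mini-batch size

for epoch in range(50):
    order = rng.permutation(X.size)          # reshuffle each epoch
    for i in range(0, X.size, batch):
        idx = order[i:i + batch]
        xb, yb = X[idx], y[idx]
        err = w * xb + b - yb                # prediction error on the mini-batch
        w -= lr * 2 * np.mean(err * xb)      # gradient of MSE w.r.t. w
        b -= lr * 2 * np.mean(err)           # gradient of MSE w.r.t. b

print(round(w, 2), round(b, 2))  # close to 3.0 and 1.0
```

Batch training would use all 1000 points per update; online training is the same loop with a batch size of 1.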
What is MANOVA?
Both ANOVA and MANOVA are particularly useful in conjunction with experimental designs, that is, research designs in which the researcher directly controls or manipulates one or more independent variables to determine their effect on the dependent variable(s).