SmartPLS 4
Workshop
Theory and Practice
By
Dalowar Hossan
dalowarhossan.bd@gmail.com
Research Title
Research Framework & Hypothesis
Research Hypothesis
Use PLS-SEM when
✓ The goal is predicting key target constructs or identifying key “driver” constructs.
✓ Formatively measured constructs are part of the structural model. Note that formative measures can also be used with CB-SEM, but doing so requires construct specification modifications (e.g., the construct must include both formative and reflective indicators to meet identification requirements).
✓ The structural model is complex (many constructs and many indicators).
✓ The sample size is small and/or the data are nonnormally distributed.
✓ The plan is to use latent variable scores in subsequent analyses.
THEORY
Fundamentals of PLS-SEM
Fundamentals of PLS-SEM (Latent & Observed Variables)
Formative Measurement Model
Reflective Measurement Model
Reflective vs Formative Measurement Model
Formative vs Reflective Measurement Model
Structural Model
Key Considerations Prior to Data Analysis
Sampling Technique
Sample Size
Pre-test & Pilot-test
Common Method Variance
Reverse Coding
EFA & CFA
Missing Value Imputation
Basic PLS Modelling
Measurement Model Assessment
Reflective Model Assessment
Convergent Validity: Factor Loadings
Convergent Validity: AVE
Discriminant Validity: Fornell & Larcker (F&L)
Discriminant Validity: HTMT
Reporting the HTMT Ratio Output
Higher Order Constructs
Mediation Analysis
Moderation
Preparation
i) Address missing values in SPSS using EM (expectation-maximization) imputation.
ii) Save the SPSS file in CSV, SPSS (.sav), and Excel formats.
iii) Remove variables/data that are not needed from the file.
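The EM imputation in step i) is done inside SPSS. As a much simpler illustration of the general idea of filling missing cells before export, here is a mean-imputation sketch in Python (mean imputation is not EM, and `mean_impute` is a hypothetical helper, not an SPSS or SmartPLS function):

```python
def mean_impute(column):
    """Replace missing entries (None) with the mean of the observed values.

    A deliberately simple stand-in: the workshop uses SPSS's EM
    (expectation-maximization) routine, which is more principled.
    """
    observed = [x for x in column if x is not None]
    mean = sum(observed) / len(observed)
    return [mean if x is None else x for x in column]

# A missing answer in [4.0, None, 5.0, 3.0] is filled with the
# observed mean (4.0 + 5.0 + 3.0) / 3 = 4.0.
filled = mean_impute([4.0, None, 5.0, 3.0])
```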
Getting Started with PLS
Path Modelling (Reflective Indicators)
Mediation Analysis
i) Assessment of measurement model (PLS algorithm)
Moderator Analysis (categorical data, which requires dummy coding)
iii) Add moderating effect; identify the moderator and IV; select product indicator and unstandardized product term
Reliability
Name of Index                      Level of Acceptance     Literature Support
Cronbach's Alpha                   > 0.70 (or 0.60)        Robinson, Shaver & Wrightsman (1991)
Internal Consistency Reliability
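The Cronbach's alpha threshold in the table can be checked directly from raw item scores. A minimal sketch, assuming a respondents-by-items numeric matrix; `cronbach_alpha` is an illustrative helper, not SmartPLS output:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    Per the table, values above 0.70 (or 0.60) indicate acceptable
    internal consistency."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)        # per-item sample variances
    total_var = X.sum(axis=1).var(ddof=1)    # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Perfectly correlated items give alpha = 1; weakly correlated items pull it toward 0.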
Convergent Validity
Name of Index                      Level of Acceptance                                    Literature Support
Average Variance Extracted (AVE)   AVE score > 0.5                                        Hair et al. (2010), Hair et al. (2014), Fornell and Larcker (1981)
Factor Loadings                    > 0.50; loadings of 0.4 to 0.7 may also be retained    Hair et al. (2007), Hair et al. (2014)
                                   if removing the indicator does not increase the
                                   AVE and composite reliability
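The AVE row above is easy to verify by hand: for standardized indicators, AVE is the mean of the squared outer loadings (`ave` is an illustrative helper):

```python
import numpy as np

def ave(loadings):
    """Average variance extracted from standardized outer loadings:
    the mean of the squared loadings. AVE > 0.50 (the table's threshold)
    means the construct explains over half of its indicators' variance."""
    L = np.asarray(loadings, dtype=float)
    return float((L ** 2).mean())

# Loadings of 0.7, 0.8, 0.9 give AVE = (0.49 + 0.64 + 0.81) / 3, about 0.65,
# which clears the 0.50 threshold.
```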
Discriminant Validity
Name of Index                  Level of Acceptance                                      Literature Support
Fornell and Larcker criterion  The square root of the AVE of each construct should      Fornell and Larcker (1981)
                               be higher than its highest correlation with any other
                               construct
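The Fornell-Larcker rule in the table can be written as a direct check: the square root of each construct's AVE must exceed its highest absolute correlation with any other construct (`fornell_larcker_ok` is an illustrative helper operating on SmartPLS-style output values):

```python
import numpy as np

def fornell_larcker_ok(aves, corr):
    """Fornell-Larcker criterion: sqrt(AVE) of every construct must be
    higher than its largest absolute correlation with the other constructs."""
    corr = np.asarray(corr, dtype=float)
    n = len(aves)
    for i in range(n):
        largest = max(abs(corr[i, j]) for j in range(n) if j != i)
        if np.sqrt(aves[i]) <= largest:
            return False
    return True
```

With AVEs of 0.64 (square root 0.8), an inter-construct correlation of 0.5 passes the check while 0.9 fails it.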
Indices for the Structural Model
1. Path Coefficient
2. R² (coefficient of determination)
3. f² (effect size)
4. Collinearity
5. Q² (predictive relevance)
Indices for Structural Model: Path Coefficient
t value > 1.96 (Hair et al., 2014)
Indices for Structural Model: R²
0.75 – Substantial
0.50 – Moderate
0.25 – Weak
Indices for Structural Model: f² (f Square)
Indices for Structural Model: Collinearity
Indices for Structural Model: Q² (Q Square)
ASSESSMENT OF INTERNAL CONSISTENCY AND CONVERGENT VALIDITY FROM EXCEL SHEET
ASSESSMENT OF DISCRIMINANT VALIDITY FROM EXCEL SHEET
ASSESSMENT OF STRUCTURAL MODEL ANALYSIS FROM EXCEL SHEET
PRACTICE
Data Outliers
Outlier screening: Z score ± 3.29
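The ±3.29 cutoff (roughly p < .001 on a standard normal) can be applied to any column of scores; `zscore_outliers` is an illustrative helper, not an SPSS or SmartPLS function:

```python
import numpy as np

def zscore_outliers(values, cutoff=3.29):
    """Return the indices of observations whose standardized (Z) score
    exceeds the +/-3.29 cutoff used on the slide for outlier screening."""
    x = np.asarray(values, dtype=float)
    z = (x - x.mean()) / x.std(ddof=1)   # sample standard deviation
    return np.where(np.abs(z) > cutoff)[0].tolist()
```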
Structural equation modeling (SEM) is a multivariate statistical technique used to analyze structural relationships. It combines factor analysis and multiple regression analysis to examine the relationships between measured variables and latent constructs. Researchers prefer it because it estimates multiple, interrelated dependence relationships in a single analysis, establishing both measurement models and structural models. Multiple regression (MR), by contrast, is a sophisticated and well-developed modeling approach to data analysis with a history of more than 100 years.
[Screenshot inserted from SmartPLS 3]
PLS-SEM MULTI-GROUP ANALYSIS
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
A regression model describes the relationship between an independent variable and a dependent variable by providing a function; formulating a regression analysis lets us predict the effects of the independent variable on the dependent one.
The objective of multiple regression analysis is to use independent variables whose values are known to predict the single dependent value selected by the researcher. Each independent variable is weighted by the regression procedure to ensure maximal prediction from the set of independent variables.
I. Linear
II. Non-linear
III. Multiple
A linear regression model depicts a relationship between variables that is proportional: the dependent variable increases or decreases with the independent variable. The graphical representation is a straight line plotted through the data. Even when the points do not fall exactly on a straight line (which is almost always the case), we can still see a pattern and make sense of it.
In a non-linear regression model, the graph does not show a linear progression. Depending on how the response variable reacts to the input variable, the fitted curve rises or falls, showing the strength of the effect on the response variable.
To decide whether a non-linear regression model best fits your scenario, examine your variables and their patterns: if the response variable shows a non-constant response to the input variable, a non-linear model may be the better choice for your problem.
A multiple regression model is used when more than one independent variable affects a dependent variable. When predicting the outcome variable, it is important to measure how each of the independent variables moves and how its changes will affect the output or target variable.
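The multiple regression described above can be fitted by ordinary least squares; a minimal NumPy sketch (`fit_multiple_regression` is an illustrative helper, not part of SPSS or SmartPLS):

```python
import numpy as np

def fit_multiple_regression(X, y):
    """Ordinary least squares with several predictors: returns the
    intercept followed by one slope per independent variable."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    design = np.column_stack([np.ones(len(y)), X])  # prepend intercept column
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef  # [intercept, b1, b2, ...]

# With y generated exactly as 1 + 2*x1 + 3*x2, the fit recovers
# intercept 1 and slopes 2 and 3.
coef = fit_multiple_regression([[0, 0], [1, 0], [0, 1], [1, 1], [2, 1]],
                               [1, 3, 4, 6, 8])
```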
If the p-value is less than 0.05 and the t-value is higher than 1.96, the effect is significant at the 95% confidence level.
The unstandardized beta represents the slope of the line between the predictor variable and the dependent variable; for example, for every one-unit increase in variable LS1, the dependent variable decreases by 0.044 units. A standardized beta coefficient compares the strength of the effect of each individual independent variable on the dependent variable: the higher the absolute value of the beta coefficient, the stronger the effect. Standardized betas range from 0 to 1 or 0 to -1, depending on the direction of the relationship.
The ANOVA tells us whether the regression model explains a statistically significant proportion of the variance. The F and p values answer the question "do the independent variables reliably predict the dependent variable?" The model is significant when the p-value is below the alpha level (typically 0.05).
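The overall F statistic reported in the ANOVA table follows directly from R², the number of predictors k, and the sample size n; `regression_f` is an illustrative helper:

```python
def regression_f(r_squared, n, k):
    """Overall ANOVA F statistic for a regression model:
    F = (R^2 / k) / ((1 - R^2) / (n - k - 1)),
    where k is the number of predictors and n the number of observations."""
    return (r_squared / k) / ((1 - r_squared) / (n - k - 1))

# For R^2 = 0.5 with one predictor and 12 observations:
# F = (0.5 / 1) / (0.5 / 10) = 10.0
```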
A coefficient of determination (R square) of 0.556 suggests that 55.6% of the variance in the dependent variable can be explained by the independent variables.
A Durbin-Watson statistic of 1.5 < d < 2.5 indicates no first-order linear autocorrelation in the multiple linear regression data.
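The Durbin-Watson statistic d cited above is computed from the residuals alone; `durbin_watson` is an illustrative helper:

```python
import numpy as np

def durbin_watson(residuals):
    """Durbin-Watson statistic: the sum of squared successive residual
    differences divided by the sum of squared residuals. Values between
    1.5 and 2.5 (the slide's band) suggest no first-order autocorrelation."""
    e = np.asarray(residuals, dtype=float)
    return float(np.sum(np.diff(e) ** 2) / np.sum(e ** 2))
```

Residuals that flip sign every observation push d toward 4 (negative autocorrelation); residuals that drift slowly push it toward 0 (positive autocorrelation).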
VIF < 10
When IVs are correlated, there are problems in estimating the regression coefficients. Collinearity means that, within the set of IVs, some IVs are (nearly) totally predicted by the other IVs.
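The VIF < 10 rule of thumb can be checked by regressing each predictor on the others; `vif` is an illustrative helper:

```python
import numpy as np

def vif(X, j):
    """Variance inflation factor of predictor j: regress column j on the
    remaining predictors (plus an intercept) and return 1 / (1 - R^2).
    VIF < 10 is the slide's rule of thumb."""
    X = np.asarray(X, dtype=float)
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    design = np.column_stack([np.ones(len(y)), others])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ coef
    r2 = 1.0 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)
```

Orthogonal predictors give VIF close to 1; a predictor that is almost a copy of another drives VIF into the thousands.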
If one or more of the eigenvalues are small (close to zero) and the corresponding condition number is large, we have an indication of multicollinearity.
How to deal with collinearity (among many options):
➢ Factor analyze the IVs to find sets of relatively homogeneous IVs that can be combined (added together).
➢ Use another type of analysis (path analysis, SEM).
For normally distributed data, the observations should lie approximately on a straight line. If the data are non-normal, the points form a curve.
All residual values should be distributed according to the frequencies of a normal distribution.
Autocorrelation occurs when the residuals are not independent of each other. A random pattern of residuals indicates the absence of autocorrelation.
The PROCESS macro is essentially an add-on to statistical programs that computes regression analyses containing various combinations of mediators, moderators, and covariates.
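PROCESS itself is an SPSS/SAS macro, but its simplest model (one mediator, simple mediation) reduces to two regressions: path a from X to M, and path b from M to Y controlling for X, with the indirect effect a*b. A minimal NumPy sketch (`indirect_effect` is an illustrative helper, not the PROCESS macro):

```python
import numpy as np

def indirect_effect(x, m, y):
    """Simple-mediation indirect effect a*b:
    a = slope of M regressed on X;
    b = slope of M in the regression of Y on X and M."""
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
    ones = np.ones(len(x))
    a = np.linalg.lstsq(np.column_stack([ones, x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([ones, x, m]), y, rcond=None)[0][2]
    return float(a * b)
```

(PROCESS additionally bootstraps a confidence interval around a*b, which this sketch omits.)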