
ASSIGNMENT 2 FRONT SHEET

Qualification BTEC HND in Business

Unit number and title Unit 41: Statistics for Management

Submission date 4/8/2024 Date Received 1st submission 4/8/2024

Re-submission Date Date Received 2nd submission

Student Name Nguyen Quoc Phu Student ID BS00708

Class MA06201 Assessor name Luong Thi Kim Ngan

Student declaration

I certify that the assignment submission is entirely my own work and I fully understand the consequences of plagiarism. I understand that
making a false declaration is a form of malpractice.

Student’s signature

Grading grid

P4 P5 P6 M3 M4 D2 D3
Summative Feedback:
Resubmission Feedback:

Grade: Assessor Signature: Date:


Internal Verifier’s Comments:

Signature & Date:


Table of Contents
Introduction
I. Apply statistical methods in business planning
1.1 State the definition of variability in business processes and quality management
1.2 The four main measures of variability (range, interquartile range, variance, and standard deviation): the definition of each measure with examples
Measures of probability
2.1 State the definition of a probability distribution
2.2 List 3 main types of distribution that we usually meet in practice/business management (normal distribution, Poisson distribution, and binomial distribution), and then discuss the normal distribution
Inference
3.1 One-sample t-test: estimation and hypothesis testing
3.2 Two-sample t-test: estimation and hypothesis testing
3.3 Measuring the association of variables (from the dataset) by regression technique
Make valid judgments and recommendations for improving business planning through several applications of the statistical methods above
II. Communicate findings using appropriate charts/tables
Different variables: the definition of nominal, ordinal, interval, and ratio levels with examples
Choosing the most effective way of communicating the results of the analysis and variables by charts/tables
Create/draw different types of methods for given variables (frequency tables, simple tables, pie charts, histograms)
Explain the advantages and disadvantages of different types of methods for given variables
Justify the rationale for choosing the method of communication
Conclusion
Reference list
Introduction
In my capacity as the company's research analyst, it is my responsibility to assess and examine pertinent
business data to support the company's initiatives to enhance its information system and decision-
making procedures. The use of statistical techniques in operations and business planning is the report's
selected topic. This report's major goal is to show that I understand different statistical methods and
how they might improve the company's business planning and decision-making. The company is
especially keen to investigate the application of statistical techniques in capacity and inventory
management. Measures of variability, probability distributions, and inferential statistics will also be
covered in the research to offer insights that help guide the company's operational planning and
strategic decisions.
In the first section, I will examine the use of statistical techniques in business planning. This will include an overview of the statistical methods that can be applied to operations management and business planning, including inventory and capacity management. In addition, I will look at
important variability metrics including variance, standard deviation, range, and interquartile range, and
discuss how they relate to business operations and quality control. Furthermore, probability distributions
will be discussed in the first section, with an emphasis on the normal distribution and how it applies to
business procedures and operations. Lastly, I will use regression analysis and other inferential statistics,
such as one- and two-sample t-tests, on the given dataset. Using the proper charts and tables, the
second component of the report will convey the results of the statistical analysis. Nominal, ordinal,
interval, and ratio measurements are among the various levels of measurement that I shall define and
illustrate. In addition, I'll choose and design efficient ways to display the study's findings, like pie charts,
frequency tables, basic tables, and histograms. The report's concluding part will include a summary and
suggestions to help the company's business planning and decision-making procedures. By organizing the
report in this manner, I hope to provide the company with a thorough grasp of the statistical techniques
that it can use to improve its information system and decision-making capabilities, thereby assisting with
its overall business planning and strategy.
Main contents
I. Apply statistical methods in business planning
Statistical methods for business planning: the statistical methods below can be applied to several areas of business planning and operations management, including inventory management and capacity management.
Measures of variability:
1.1 State the definition of variability in business processes and quality management.
Definition of variability in business processes
According to Team, E. (2024), variation in business processes refers to uncontrolled or undesirable
differences in the performance and output of a business process, reflecting deviations or inconsistencies
between the ideal process and actual performance in the real world. This variation can occur in terms of
time, resources, results, or quality.
Definition of variability in quality management
The law of variation is a concept used to describe the difference between an ideal situation and an actual
situation. The ideal situation is often defined by standards set by management, based on customer
requirements, government regulations, or expectations from stakeholders such as shareholders and
suppliers. In quality management, variation is often manifested as fluctuations in data, unexpected
results, or inconsistent product quality. It is important to realize that in reality, everything has some
degree of variation, even items that appear identical to the naked eye (Weedmark, D. 2021).
For example, no two cans of soup have the same number of calories; no two tires weigh the same. The
goal of quality control is to keep this variation to a minimum or within acceptable limits.

In short, quality management focuses on controlling change to ensure that products and services are
consistently of high quality and meet customer expectations.
1.2 The four main measures of variability (range, interquartile range, variance, and standard
deviation). State the definition of each measure and give examples.
Range
The range (also known as the full range) is an indicator calculated as the difference between the largest and smallest values in a series of data. The larger the range, the greater the variability of the indicator; conversely, a small range means the variability of the indicator is low, which means the uniformity of the data is high.
Example

Graduation exam scores in Geography – VietNamnet.vn


Lowest score (Min): 0.0
Highest score (Max): 10.0
The range is calculated by the formula Range = Max − Min, so the range of this score distribution is 10 points.
Interquartile range
The interquartile range is defined as the difference between the third and first quartiles. Quartiles are the partition values that split the entire ordered series into four equal sections, so there are three quartiles. The first, or lower, quartile is denoted Q1; the second is Q2 (the median); and the third, or upper, quartile is Q3. The interquartile range is therefore the upper quartile minus the lower quartile.
Interquartile range formula: IQR = Q3 − Q1

Example
I have a dataset with 11 values

In an odd-numbered data set, the median is the number in the middle of the list. The median itself is
excluded from both halves: one half contains all values below the median, and the other half contains all
values above the median.

Q1 is the median of the first half and Q3 is the median of the second half. Since each of these halves has an odd number of values, there is a single middle value in each half.

Finally, the interquartile range is calculated as IQR = Q3 − Q1.
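Since the original 11-value dataset is shown only as a figure, the short Python sketch below illustrates the same steps on a hypothetical set of 11 values (the numbers are placeholders, not the original data):

```python
# A minimal sketch of the quartile/IQR steps described above,
# using a hypothetical 11-value dataset (the original values were shown in a figure).
data = sorted([4, 7, 9, 11, 12, 20, 25, 26, 30, 41, 45])

n = len(data)                      # 11 values, so the median is the 6th value
median = data[n // 2]              # middle value; excluded from both halves

lower_half = data[:n // 2]         # values below the median
upper_half = data[n // 2 + 1:]     # values above the median

q1 = lower_half[len(lower_half) // 2]   # median of the lower half
q3 = upper_half[len(upper_half) // 2]   # median of the upper half

iqr = q3 - q1
print(f"Q1 = {q1}, Q3 = {q3}, IQR = {iqr}")
```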

Variance
According to VietnamBiz (2019), variance is symbolized as σ² in statistics and measures how widely values are spread around the mean. In financial investing, the variance of asset returns in a portfolio is used as a means to allocate assets effectively: the variance equation compares the performance of the components of an investment portfolio with each other and with the portfolio's average performance.
Example

Average rainfall per month (mm)- Cucdulichquocgia

January 14 July 295


February 4 August 271
March 9 September 342
April 51 October 260
May 213 November 119
June 309 December 47

Mean: μ = (1/12)(14 + 4 + 9 + 51 + 213 + 309 + 295 + 271 + 342 + 260 + 119 + 47) = 1934/12 ≈ 161.17 mm


Variance = 16119.57

Standard Deviation = 126.97 mm

Standard deviation
Standard deviation is a measure used in statistics and finance, often applied to the annual rate of return of an investment to shed light on its historical volatility. The larger the standard deviation of a stock, that is, the larger the deviation of the stock price from its average value, the wider the range of its price fluctuations (VietnamBiz, 2019).
The formula for calculating the (population) standard deviation is the square root of the variance: σ = √( Σ(xᵢ − μ)² / N ).
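As a quick illustration, the following Python sketch computes the four measures of variability for the rainfall example above, using population formulas (dividing by N). Minor rounding differences from the figures quoted above are expected, and the quartile values depend on the interpolation method used:

```python
# A short sketch computing the four measures of variability discussed above,
# using the monthly rainfall example (population formulas, i.e. dividing by N).
import statistics

rainfall = [14, 4, 9, 51, 213, 309, 295, 271, 342, 260, 119, 47]  # mm, Jan-Dec

data_range = max(rainfall) - min(rainfall)          # range = max - min
q1, q2, q3 = statistics.quantiles(rainfall, n=4)    # quartiles (default exclusive method)
iqr = q3 - q1                                       # interquartile range
mean = statistics.fmean(rainfall)                   # ~161.17 mm
variance = statistics.pvariance(rainfall, mean)     # population variance
std_dev = statistics.pstdev(rainfall, mean)         # population standard deviation ~126.97 mm

print(f"Range: {data_range} mm")
print(f"IQR:   {iqr:.2f} mm")
print(f"Variance: {variance:.2f}")
print(f"Std dev:  {std_dev:.2f} mm")
```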

Measures of probability:
Probability distributions and application to business operations and processes.
2.1 State the definition of a probability distribution.
According to Hayes, A. (2024), a probability distribution is a statistical function that describes all the
possible values and probabilities that a random variable can take within a certain range. This range will
be limited between the minimum and maximum possible values.
There are many classifications of probability distributions. These include normal, chi-square, binomial,
and Poisson distributions.

2.2 List 3 main types of distribution in practice/business management: normal distribution, Poisson distribution, and binomial distribution; the discussion below then focuses on the normal distribution.
2.2.1 Definition of a probability distribution

As defined in section 2.1, a probability distribution is a statistical function that describes all the possible values and probabilities that a random variable can take within a certain range (Hayes, A. 2024).
2.2.2 Normal distribution
The normal distribution, the most common distribution function for independent variables, is generated by chance. Its familiar bell-shaped curve appears everywhere in statistical reports, from survey analysis and quality control to resource allocation (Britannica, 2024).

Normal distribution – Britannica (figure)
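To connect the normal distribution to quality control, the sketch below shows how probabilities can be read off a normal model; the mean of 500 g and standard deviation of 10 g are purely illustrative assumptions, not values from the dataset:

```python
# A minimal sketch, assuming a product weight that is approximately normally
# distributed with mean 500 g and standard deviation 10 g (illustrative numbers only).
from scipy.stats import norm

mu, sigma = 500, 10
dist = norm(loc=mu, scale=sigma)

# Probability that a unit falls within +/-1 standard deviation of the mean (~68%)
p_within_1sd = dist.cdf(mu + sigma) - dist.cdf(mu - sigma)

# Probability that a unit is below a lower quality-control limit of 480 g
p_below_limit = dist.cdf(480)

print(f"P(490 <= X <= 510) = {p_within_1sd:.4f}")
print(f"P(X < 480)         = {p_below_limit:.4f}")
```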
Inference:
3.1 One sample T-test: Estimation and Hypotheses testing.
Estimation Definition + Process
According to Stattrek (n.d), estimation is the process of inferring the value of a population parameter
based on information obtained from a sample. In simpler terms, it is about making educated guesses
about the characteristics of a larger group (population) using data from a smaller group (sample).
An estimate of a population parameter can be expressed in two ways:
Point estimate: A point estimate of a population parameter is a single value of a statistic. For example,
the sample mean x is a point estimate of the population mean μ. Similarly, the sample proportion p is a
point estimate of the population proportion P.
Interval estimate: An interval estimate is defined by two numbers between which the population
parameter is said to lie. For example, a < x < b is an interval estimate of the population mean μ. It
indicates that the population mean is greater than a but less than b.

Estimation process (one sample)


Step 1. Define population and parameters:
Identify the population you want to study.
Identify the characteristic you want to estimate. For example, the parameter might be the average GPA of all students at a college.
Step 2. Random sampling:
Select a representative subset of the population using random sampling methods to avoid bias.
Step 3. Calculate sample statistics:
Calculate relevant statistics from sample data. For example, calculate the sample mean to estimate the
population mean.
Step 4. Construct a confidence interval (optional):
If desired, calculate a confidence interval to provide a range of plausible values for the population
parameter. This involves determining the margin of error and constructing the interval based on the
sample statistics and the chosen confidence level.
Hypotheses testing + Process
Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is
often used by scientists to test specific predictions, called hypotheses, that arise from theories (Bevans,
R. 2019).
Steps in One-Sample Hypothesis Testing
Step 1: State the null and alternative hypotheses:
Null hypothesis (H₀): This is the statement you want to test. It typically assumes no effect or no
difference.
Alternative hypothesis (H₁): This is the statement you're trying to find evidence for. It contradicts the null
hypothesis.
Step 2: Choose a significance level (α):
This is the probability of rejecting the null hypothesis when it's true (Type I error). Common values are
0.05 or 0.01.
Step 3: Calculate the test statistic
The test statistic measures how far the sample statistic deviates from the hypothesized population
parameter. The choice of test statistic depends on the data distribution and the parameter being tested
(e.g., z-test, t-test, chi-square test).
Step 4: Determine the p-value:
The p-value is the likelihood of observing a test statistic as extreme as, or more extreme than, the one computed, on the assumption that the null hypothesis is true.
Step 5: Make a decision:
Compare the p-value to the significance level (α):
If p-value ≤ α, reject the null hypothesis in favor of the alternative hypothesis.
If p-value > α, fail to reject the null hypothesis.
Step 6 : Interpret the results:
Clearly state your conclusion in the context of the problem.
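The following Python sketch walks through these one-sample steps with scipy; the sample values and the hypothesised mean of 50 are illustrative assumptions only:

```python
# A sketch of the six one-sample testing steps above, using scipy.
# The sample values and the hypothesised mean of 50 are illustrative only.
from scipy import stats

sample = [52, 48, 51, 55, 47, 53, 50, 49, 54, 51]   # hypothetical sample data
mu_0 = 50                                           # H0: population mean = 50
alpha = 0.05                                        # chosen significance level

t_stat, p_value = stats.ttest_1samp(sample, popmean=mu_0)  # two-sided test

print(f"t = {t_stat:.3f}, p-value = {p_value:.3f}")
if p_value <= alpha:
    print("Reject H0: the population mean appears to differ from 50.")
else:
    print("Fail to reject H0: no evidence the population mean differs from 50.")
```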
3.2 Two sample T-test: Estimation and Hypotheses testing.
Estimation
Two-sample estimation is a statistical technique used to compare two independent groups of data. The
main goal of two-sample estimation is to test whether there is a statistically significant difference
between the two groups.
Steps to estimate 2 samples:
Collect data: Collect two independent data sets, each representing a group.
Choose the appropriate statistical test: Choose the t-test for two independent samples or the Z-test
depending on the sample size and standard deviation of the population.
Set up the hypothesis: Set up the null hypothesis (no difference between the two groups) and the
alternative hypothesis (there is a difference between the two groups).
Calculate the test statistic: Calculate the test statistic value based on the sample data.
Compare with the critical value or p-value: Compare the test statistic value with the critical value or
calculate the p-value to make a decision to accept or reject the null hypothesis.
Conclusion: Based on the test results, draw a conclusion about the difference between the two groups.
Hypotheses testing 2 sample and processing
A two-sample hypothesis test is a statistical technique used to compare two independent groups of data.
The main objective of this test is to test whether there is a statistically significant difference between the
two groups.
Steps to perform 2-sample hypothesis testing:
Step 1: State the hypotheses:
Null hypothesis (H0): There is no difference between the two groups.
Alternative hypothesis (H1): There is a difference between the two groups.
Choose the significance level (α): The significance level indicates the probability of rejecting the null
hypothesis when it is true. Typically, α = 0.05.
Step 2 Choose the test statistic:
t-test: Used when the data follows a normal distribution and the population standard deviation is
unknown.
Z-test: Used when the data follows a normal distribution and the population standard deviation is
known.
Other tests: Other tests may be used depending on the type of data and hypothesis.
Step 3 Calculate the test statistic: Calculate the test statistic based on the sample data.
Step 4: Compare to the critical value or p-value:
Critical value: Compare the test statistic with the critical value found from the distribution table.
P-value: Calculate the probability of obtaining a test statistic value greater than or equal to the
calculated value, assuming the null hypothesis is true. If the p-value is less than the significance level α,
we reject the null hypothesis.
Step 5: Conclusion:
Reject H0: There is enough evidence to conclude that there is a statistically significant difference
between the two groups.
Do not reject H0: There is not enough evidence to conclude that there is a statistically significant
difference between the two groups.
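A minimal sketch of this two-sample procedure in Python is shown below; the two groups are hypothetical, and Welch's t-test is used because the population standard deviations are treated as unknown:

```python
# A sketch of the two-sample procedure above, comparing two independent groups.
# The data are hypothetical; Welch's t-test is used (equal_var=False) because
# the population standard deviations are assumed unknown and possibly unequal.
from scipy import stats

group_a = [23, 25, 28, 22, 26, 27, 24, 25]
group_b = [30, 29, 27, 31, 28, 32, 30, 29]
alpha = 0.05

t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)

print(f"t = {t_stat:.3f}, p-value = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0: there is a statistically significant difference between the groups.")
else:
    print("Do not reject H0: no statistically significant difference was found.")
```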
3.3 Measuring the association of variables (from the dataset) by regression technique.
Regression technique definition
Regression is a statistical method used to analyze the relationship between a dependent variable (target variable) and one or more independent variables (predictor variables), allowing predictions through various regression models. The goal is to determine the best-fitting function that describes the relationship between these variables (GeeksforGeeks, 2024).
Dataset

Average GRDP income of Vietnamese provinces in 2022


The estimated regression equation is written as follows:
Y = 84.139 + 4.9167 × 10⁻⁵ · X

Graphical representation:

Explanation of Regression Equation:


R Square: 0.061085403
Only about 6.1% of the variation in GRDP per capita is explained by the GRDP of the province. This shows
that the GRDP of the province is not the most important factor determining GRDP per capita.
Significance F: 0.080374159
The Significance F value is greater than 0.05, indicating that this regression model is not statistically significant at the
5% significance level, meaning that the GRDP variable of the province does not have a significant effect
on GRDP per capita income.
Intercept (Intercept Factor): 84.138
This is the predicted GRDP value per capita when the province's GRDP (variable X) is 0. In practice, a
GRDP value of 0 may not be realistic, but the intercept is still meaningful in building the regression
model.
Coefficient (coefficient of GRDP): 4.9167 × 10⁻⁵
This coefficient shows the change in GRDP per capita income (variable Y) when the GRDP of the province (variable X) increases by 1 unit. Specifically, if the GRDP of the province increases by 1 billion VND, GRDP per capita income is expected to increase by 4.9167 VND (because the coefficient is 4.9167 × 10⁻⁵ and 1 billion VND = 10⁹ VND).
Conclusion: The regression equation shows a very weak relationship between provincial GRDP and GRDP
per capita. Although there is a small positive effect, most of the variation in GRDP per capita is not
explained by provincial GRDP. This may indicate that there are other factors influencing GRDP per capita
than the overall provincial GRDP index.
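For reference, a regression of this form could be reproduced in Python as sketched below; the file name and column names (provinces_2022.csv, provincial_grdp, grdp_per_capita) are assumptions about how the dataset might be stored, not the actual file used in this report:

```python
# A sketch of the simple regression underlying the equation Y = 84.139 + 4.9167e-5*X.
# The column names 'grdp_per_capita' and 'provincial_grdp' are assumptions about how
# the dataset might be laid out; the exact figures depend on the original data file.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("provinces_2022.csv")          # hypothetical file name

X = sm.add_constant(df["provincial_grdp"])      # adds the intercept term
y = df["grdp_per_capita"]

model = sm.OLS(y, X).fit()
print(model.params)        # intercept and slope (compare with 84.139 and 4.9167e-5)
print(model.rsquared)      # compare with R Square = 0.0611
print(model.f_pvalue)      # compare with Significance F = 0.0804
```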
Regression: Run multiple regression and Interpret the output.
Multiple Regression is a statistical technique used to determine the relationship between a dependent
variable and two or more independent variables. The multiple regression equation has the form:

Y = a + b₁X₁ + b₂X₂ + ... + bₙXₙ + ε

where:
Y is the dependent variable
X₁, X₂, ..., Xₙ are the independent variables
a is the intercept
b₁, b₂, ..., bₙ are the regression coefficients
ε is a random error term
Multiple regression examines the impact of more than one explanatory variable on some outcome of
interest. It assesses the relative impact of these explanatory or independent variables on the dependent
variable while holding all other variables in the model constant (Adam Hayes, 2024).
Least squares method:

Multiple regression equation (with two independent variables): Y = a + b₁X₁ + b₂X₂

Y = −565.745 + 4.631 × 10⁻⁵ · X₁ + 9.867 · X₂
Y is the dependent variable
X₁ is the first independent variable
X₂ is the second independent variable
−565.745 is the intercept
4.631 × 10⁻⁵ is the coefficient for the first independent variable
9.867 is the coefficient for the second independent variable
R Square: 0.360694958
Equation explanation:
R Square: 0.360694958
R Square represents the proportion of the variance of the dependent variable (GRDP per capita) that can
be predicted from the independent variables (provincial GRDP and provincial CPI). In this case, 36.07% of
the variation in GRDP per capita can be explained by provincial GRDP and provincial CPI. This shows a
moderate level of explanation.
Significance F: 2.17268E-05
Significance F indicates whether the overall regression model is statistically significant. A very small value
(less than 0.05) indicates that the model fits the data well, meaning that the independent variables
(provincial GRDP and provincial CPI) significantly predict the dependent variable (per capita GRDP). Here,
the value of 2.17268E-05 is much smaller than 0.05, indicating that the model is statistically significant.
Intercept: −565.7455813
When both provincial GRDP and provincial CPI are zero, i.e. no economic activity is taking place in that
province and prices are unchanged, the model predicts that the province's per capita GRDP will be -
565.7455813. A negative value for per capita GRDP is economically meaningless. This suggests that my
model may not be appropriate when applied to such extreme cases (i.e. when both provincial GRDP and
provincial CPI are zero).
GRDP province (Coefficient: 4.631 × 10⁻⁵)
For every unit increase in provincial GRDP, GRDP per capita increases by 4.631 × 10⁻⁵, keeping provincial
CPI constant. This relationship is statistically significant (p-value = 0.0493), meaning that changes in
provincial GRDP have a significant impact on GRDP per capita.
CPI province (Coefficient: 9.867):
For each unit increase in provincial CPI, GRDP per capita increases by 9.867, keeping provincial GRDP
constant. This relationship is statistically significant (p-value = 1.92396E-05), indicating that changes in
provincial CPI have a significant impact on GRDP per capita.
Conclusion: In summary, the regression analysis shows that both provincial GRDP and provincial CPI
significantly affect the per capita GRDP of the provinces. The model explains 36.07% of the variance of
per capita GRDP and the significance F shows that the model is statistically significant. The coefficients
show that the increase in both provincial GRDP and provincial CPI leads to the increase in per capita
GRDP, in which provincial CPI has a larger impact per unit increase than provincial GRDP.
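The multiple regression interpreted above could be reproduced along the following lines; again, the file name and column names are assumptions about the dataset layout rather than the actual workbook used here:

```python
# A sketch of the multiple regression Y = a + b1*X1 + b2*X2 described above,
# with provincial GRDP and provincial CPI as predictors of GRDP per capita.
# Column names and the file name are assumptions, not the original dataset's.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("provinces_2022.csv")                      # hypothetical file name

X = sm.add_constant(df[["provincial_grdp", "provincial_cpi"]])
y = df["grdp_per_capita"]

model = sm.OLS(y, X).fit()
print(model.summary())     # R Square, Significance F, intercept and coefficients,
                           # comparable to the values interpreted in the text
```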
Make valid judgments and recommendations for improving business planning through several
applications of the statistical methods above.
Business planning can be improved through regression analysis. This technique helps businesses identify
strengths and weaknesses, thereby suggesting specific strategies. To make reasonable assessments and
recommendations to improve the business plan, I will use the dataset of Vinfast's stock prices. By
applying some statistical methods, I can analyze this data and make reasonable recommendations.
VinFast Stock Data

Equation explanation:
R Square:
In this result, R Square = 0.1679, which means that about 16.79% of the change in the opening price of a
stock can be explained by the change in the trading volume of the stock. This shows that there are many
factors other than the trading volume of the stock that affect the opening price of a stock, which this
model does not explain.
Significance F:
This value tests whether the overall regression model is statistically significant. If this value is less than
the significance level (usually 0.05), then the overall regression model is significant.
Here, Significance F = 0.00289, which is less than 0.05, indicating that the overall regression model is
statistically significant. This means that the volume of shares traded has a statistically significant
relationship with the opening price of the stock.
Coefficients (Regression coefficient):
Intercept: This coefficient represents the average opening price of a stock when the trading volume is 0.
Here, the intercept coefficient is 4.59604735.
Trading volume: This coefficient shows the average change in the opening price of a stock when the
trading volume changes by one unit. Here, the coefficient is -0.09192e-07, meaning that when the
trading volume increases by one unit, the opening price of the stock will decrease by an average of
0.09192e-07 units, assuming other factors remain unchanged.
Multiple regression:
R Square = 0.96269996
R^2 indicates the percentage of the variation in the opening price of a stock (Y) that is explained by two
independent variables: the adjusted closing price of the stock (X1) and the volume of shares traded (X2).
Interpretation: With R^2 = 0.9627, about 96.27% of the variation in the opening price of a stock can be
explained by the adjusted closing price and the volume of shares traded. This is a very high value,
indicating that the regression model fits the data very well.
Significance F = 5.26045E-35
This value tests whether the overall regression model is statistically significant. If this value is less than
the significance level (usually 0.05), then the overall regression model is significant.
Here, Significance F = 5.26045E-35, which is much smaller than 0.05, which indicates that the overall
regression model is highly statistically significant. In other words, at least one of the independent
variables (adjusted closing price and trading volume) has a significant relationship with the opening price
of the stock.
Coefficients
Regression coefficients and related indicators:
Intercept: 0.40299381
Meaning: This coefficient represents the average opening price of a stock when both the adjusted closing price and the volume of shares traded are zero. The intercept coefficient is less meaningful in practice because it is unlikely that both independent variables will be zero.
Trading volume coefficient: At 95% confidence, the regression coefficient of the volume of shares traded ranges from −1.0108E-07 to −4.1163E-08. This range does not contain the value of zero, indicating that the regression coefficient is highly statistically significant.
Strengths
From the results of the univariate and multivariate regressions discussed, it is possible to infer some
strengths of VinFast based on the application of a similar analytical approach to evaluate the company's
performance, including the use of variables such as stock trading volume, adjusted closing price, and
other factors that affect stock price. The multivariate regression shows that a more complex model,
including multiple variables, can explain a large proportion of the variation in the opening price of the
stock. With R Square = 0.9627, VinFast can use this model to more accurately predict its stock price
based on multiple factors such as trading volume, adjusted closing price, and other factors. This allows
the company to forecast and manage its finances more effectively. Even when using only one variable
such as trading volume, the univariate regression model can still provide a certain level of explanation
for the opening price of the stock, showing that VinFast can identify and focus on the key factors that
have a large impact on its stock price. This helps the company optimize its business and investment
strategies. In addition, the Significance F values of both the univariate and multivariate regressions are
statistically significant, demonstrating that factors such as trading volume and adjusted closing price do
indeed affect stock performance. VinFast can be confident that these factors are significant and can rely
on them to adjust its financial and business strategies. Furthermore, the flexibility in data analysis,
through the combination of univariate and multivariate regression, allows VinFast to better understand
the factors affecting stock performance, thereby making data-driven decisions to optimize revenue,
manage risks and improve overall financial performance.
Ultimately, this capability helps VinFast maintain and improve its competitive position in the market
while optimizing its financial and business strategies to achieve sustainable development.
Weaknesses
From the results of the univariate and multivariate regressions discussed, several weaknesses of VinFast
can be inferred based on the application of a similar analytical approach to evaluate the company's
performance. First, the univariate regression shows that only about 16.79% of the volatility of the
opening price of the stock can be explained by the trading volume of the stock. This indicates that there
are many other factors beyond the control of VinFast that affect the company's stock price, including
macro factors such as the economic situation, international market fluctuations, or factors specific to the
automobile industry. Therefore, the company may have difficulty in fully controlling its stock price.
Second, although the multivariate regression can explain a large proportion of the volatility of the
opening price of the stock (R Square = 0.9627), relying on a model with many variables may increase the
risk if one or more of these variables become unstable or difficult to predict. If variables such as trading
volume and adjusted closing price change significantly due to external factors, the reliability of the
predictive model may be reduced. Third, the multivariate regression model may be at risk of overfitting,
i.e. the model fits the current data too well but does not perform well on new data sets. This may lead to
inaccurate data-driven decisions when market conditions change or when the company expands into
new markets with different characteristics. Fourth, although the univariate regression model provides a
certain level of explanation, it still explains only a small part of the volatility of the opening price of the
stock. This shows that relying on only one variable such as trading volume to predict stock prices is not
enough and may lead to inaccurate results if other factors are not considered. Finally, to maintain the
effectiveness of the regression models, VinFast needs to continuously update and adjust the model
based on new data. This requires continuous investment in technology and human resources to analyze
and update the models, and if not done well, the models can become outdated and inaccurate.
Although the results of univariate and multivariate regression provide a lot of useful information for
predicting VinFast's stock price, the company also needs to be aware of these weaknesses so that it can
adjust its business strategy and manage risks more effectively. Understanding these limitations helps
VinFast improve its data analysis models, thereby making more accurate and effective business decisions
in the future.
Recommendations
From the strengths and weaknesses analyzed, reasonable recommendations and assessments can be
made to improve VinFast's planning and strategy. VinFast should invest more heavily in data analysis and
forecasting technology. Using modern analysis tools and artificial intelligence (AI) to continuously update
and improve regression models will help the company predict stock price fluctuations and other
influencing factors more accurately, thereby managing finances more effectively. Instead of relying solely
on variables such as trading volume and adjusted closing price, VinFast should consider other factors
such as the macroeconomic situation, consumer trends, government policies and industry competition.
This will help the company have a more comprehensive view of the factors affecting stock prices,
thereby making more accurate business decisions. In addition, to avoid the risk of overfitting, VinFast
should apply model validation techniques such as cross-validation and splitting the data into training and
testing sets. The company should also regularly re-evaluate its predictive models using new data sets,
helping to ensure that the predictive models are not only relevant to current data but also applicable to
changing market conditions. Fourth, VinFast should establish a process for continuously updating and
adjusting its predictive models, including collecting new data, re-evaluating the current model, and
adjusting as necessary. This process will keep VinFast's predictive models up to date and accurate,
minimizing risks from changing external factors and ensuring high reliability. VinFast should focus on key
factors that have been identified as having a large impact on stock prices, such as trading volume and
adjusted closing price. This includes regular monitoring and analysis of these factors, as well as adjusting
business strategies based on changes detected. Focusing on key factors will help VinFast optimize its
business and investment strategies and make decisions based on reliable data.
Finally, VinFast needs to be more transparent in sharing information with shareholders about the factors
affecting its share price and how the company uses data analytics to predict and manage risks.
Transparency and good communication will help strengthen shareholder confidence, enhance the
company's reputation in the market, and facilitate future fundraising. While the results of the univariate
and multivariate regressions provide a lot of useful information for predicting VinFast's share price, the
company also needs to be aware of these weaknesses so that it can adjust its business strategy and
manage risks more effectively. Understanding these limitations helps VinFast improve data analysis
models, thereby making more accurate and effective business decisions in the future.
II. Communicate findings using appropriate charts/tables.
Different variables:
State the definition of nominal, ordinal, interval, and ratio levels with examples
Nominal
Nominal data is data that is divided into non-overlapping groups or categories, and these groups have no
natural order. This means that the groups cannot be arranged in any order and do not provide
information about quantities. In statistics, “nominal” means “name only.” This indicates that nominal
data includes only the names of the groups to which each observation belongs. Nominal and categorical
data have the same meaning and can be used interchangeably (Jim Frost, n.d).
Example

Public transport, location, and income of residents - Careerfoundry

Ordinal
Ordinal data is a type of data whose values can be ranked or arranged in a natural order, but the
distances between these values are not necessarily equal. This means that you know the order of the
values, but cannot determine the exact distance or difference between them (Bhandari, P. 2023).
Example: This is a question in my survey about Samsung's CSR
These levels can be ranked from highest to lowest interest:
1. Very interested
2. Interested
3. Not interested
4. Not at all interested
This data allows us to see the order of interest of survey respondents. However, it does not provide
information about the degree of variation between the interest levels. For example, there is no way to
know how much "Interested" differs from "Very interested", only that "Interested" is less than "Very
interested".
Interval
Interval data is measured on a numerical scale with equal distances between adjacent values. These
distances are called “intervals”. Interval scales do not have a true zero. This means that a zero point on
an interval scale is not the complete absence of the variable but is simply an arbitrary point (Bhandari, P.
2020). More from Hillier, W. (2021) Interval data is a type of quantitative data, that is, data that is
represented numerically. It groups variables into categories and always uses some kind of ordered scale.
Furthermore, interval values are always ordered and separated using an equal-interval measure.
Example: Let's say I have a list of daily temperatures in Celsius: 20°C, 25°C, 30°C. This is interval data
because:
Thermometer chart during the day
The difference between 20°C and 25°C is 5°C, and the difference between 25°C and 30°C is also 5°C. The
difference between the values is always the same.
Ratio
A ratio scale is a quantitative scale in which there is a true zero and equal intervals between neighboring
points. Unlike on an interval scale, zero on a ratio scale means the complete absence of the variable you
are measuring (Hiller, W. 2023)
Example: the duration of an activity measured in hours. A duration of 12 hours is twice as long as 6 hours, and 0 hours means that no time has elapsed at all, which is what makes it a ratio measurement.
Choosing the most effective way of communicating the results of your analysis and variables by
charts/tables.
The dataset analyzed here includes 50 observations and 6 variables: 5 quantitative variables and 1 qualitative variable, relating to the GRDP per capita of 50 provinces, provincial GRDP, CPI, urbanization rate, and population.
Pivot Table of Dataset
This Pivot Table provides an overview of the socio-economic situation of some provinces in Vietnam,
focusing on the following important indicators: name of the province (Row Labels), average GRDP per capita (in million VND/person/year), total GRDP of the province (in billion VND), Consumer Price Index (Sum of CPI), and urbanization rate of the province (Sum of urbanization rate). The "Sum of Average
GRDP per capita" index indicates the average living standard of people in each province, with a higher
index meaning a higher living standard of people in that province. The "Sum of GRDP" index reflects the
scale of the province's economy, the province with a higher GRDP has a more developed economy. "Sum
of CPI" shows the rate of change in prices of a typical basket of goods and services purchased by
consumers over a period of time, while "Sum of urbanization rate" shows the proportion of the
province's urban population, the higher the proportion, the greater the level of urbanization of the
province. Through these indicators, the Pivot table provides a comprehensive picture of the living
standards, economic scale, population and urbanization level of provinces and cities, helping to clearly
identify the economic and social differentiation between regions.

Average GRDP per capita income of 51 provinces in 2022


Through the chart of GRDP per capita of 51 provinces in 2022, we can draw some important information
about the differentiation in living standards between provinces and cities in Vietnam. The bar chart
shows the huge difference in GRDP per capita between provinces. Some provinces have very high living
standards, while others have very low ones. Provinces with high living standards are often concentrated
in large urban areas, key economic regions, where there are many job opportunities and attract a lot of
investment. This chart is important in showing income inequality between regions. Provinces with high
living standards often have greater economic development potential, showing the need for support
policies to develop the economy in provinces with lower living standards to reduce this inequality.
Create/draw different types of methods for given variables (frequency tables, simple tables, pie
charts, histograms).
Frequency tables

Frequency table of GRDP average income per capita

This frequency table provides an overview of income differentiation among delta regions in Vietnam in
2022, based on GRDP per capita. There are clear differences in living standards between regions in
Vietnam. Some regions are concentrated in the high-income group (over VND 100 million/person/year),
while other regions are in the lower-income group. However, most regions have GRDP per capita in the
range of VND 40-120 million/person/year, showing a not-so-large differentiation but still enough to
notice the difference. Only a few regions have GRDP per capita above VND 180 million/person/year,
showing that very high living standards are concentrated in certain localities.
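A frequency table like the one described above can be built in Python as sketched below; the income bands, file name, and column name are assumptions chosen to mirror the ranges discussed in the text:

```python
# A sketch of building a frequency table of GRDP per capita by income band,
# mirroring the bands discussed above (e.g. 40-120 and over 180 million VND/person/year).
# The column name and file name are assumptions about the dataset layout.
import pandas as pd

df = pd.read_csv("provinces_2022.csv")                      # hypothetical file name

bins = [0, 40, 80, 120, 160, 200, float("inf")]             # million VND/person/year
labels = ["<40", "40-80", "80-120", "120-160", "160-200", ">200"]

freq_table = (
    pd.cut(df["grdp_per_capita"], bins=bins, labels=labels)
      .value_counts()
      .sort_index()
)
print(freq_table)          # number of provinces falling in each income band
```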
Simple tables

Hà Nội 1,196,000 Hà Tĩnh 91.91


Quảng Ninh 269.24 Quảng Bình 50
Vĩnh Phúc 153.12 Quảng Trị 40.82
Bắc Ninh 243.03 Thừa Thiên Huế 66.35
Hải Dương 169.18 Đà Nẵng 125.22
Hải Phòng 365.59 Quảng Nam 116.37
Hưng Yên 132 Quảng Ngãi 121.34
Thái Bình 110.72 Bình Định 106.35
Hà Nam 76.4 Phú Yên 50.5
Nam Định 91.96 Khánh Hòa 96.44
Ninh Bình 81.78 Ninh Thuận 46.49
Hà Giang 30.57 Bình Thuận 97.13
Cao Bằng 21.64 Kon Tum 30.41
Bắc Kạn 15.01 Gia Lai 107.05
Tuyên Quang 41.71 Đắk Lắk 108.18
Lào Cai 67.96 Đắk Nông 39.94
Điện Biên 25.24 Lâm Đồng 103.5
Lai Châu 23.38 Đồng Nai 434.99
Sơn La 64.51 Cần Thơ 107.7
Yên Bái 40.21 Bình Dương 459.04
Hòa Bình 56.64 Vũng Tàu 390.29
Thái Nguyên 142.95 Long An 156.36
Lạng Sơn 41.49 Hậu Giang 48.06
Bắc Giang 155.88 TP HCM 1,479,227
Phú Thọ 89.4
Thanh Hóa 252.67
Nghệ An 175.58
Provincial GRDP (billion VND)

The table shows the gross regional domestic product (GRDP) of 51 provinces and cities in Vietnam in
2022. It reflects the size and strength of the economy in each locality. This table is of great significance in
understanding the economic differentiation between provinces and cities. Some provinces have very
high GRDP, while others have relatively low GRDP, showing the difference in economic scale between
localities. Provinces with high GRDP often have strong economic development, attract a lot of
investment and have great development potential. On the contrary, provinces with low GRDP need
support policies to develop the economy. Although the table only provides total GRDP, it also partly
reflects the economic structure of each province. Provinces with high GRDP often have diversification in
economic sectors.

Pie charts

GRDP average income per capita of 51 provinces


This pie chart provides a visual way to compare per capita income levels across provinces in a given year.
Each pie slice represents a province, and the size of the slice is proportional to the percentage of that
province's population whose per capita income falls within the corresponding range.
Histograms

Urbanization rate of 51 provinces


The chart shows a clear differentiation in the level of urbanization among provinces. Some provinces
have very high urbanization rates, while others have very low rates, reflecting the difference in the level
of socio-economic development among regions. Provinces with high urbanization rates are often
concentrated in key economic regions such as the Red River Delta and the Mekong River Delta.
Meanwhile, mountainous and midland provinces often have lower urbanization rates. The orange curve
in the chart shows the general trend of urbanization. This curve tends to increase, showing that
urbanization in Vietnam is taking place rapidly.
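A histogram of this kind can be drawn with matplotlib as in the sketch below; the file name and column name are assumptions about how the urbanization data might be stored:

```python
# A sketch of the histogram type discussed above, drawn with matplotlib.
# The urbanization-rate column and file name are assumptions about the dataset layout.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("provinces_2022.csv")                      # hypothetical file name

plt.hist(df["urbanization_rate"], bins=10, edgecolor="black")
plt.title("Urbanization rate of provinces")
plt.xlabel("Urbanization rate (%)")
plt.ylabel("Number of provinces")
plt.tight_layout()
plt.show()
```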
Explain the advantages and disadvantages of different types of methods for given variables.
Frequency tables
In the field of data analysis, choosing the right method of data presentation is very important. Each
method has its advantages and disadvantages, suitable for each type of data and specific purposes. This
essay will analyze in detail the frequency table, simple table, pie chart and histogram, along with specific
situations when to use each method.
A frequency table is a clear and intuitive tool for comparing the frequency of different values in a data
set. With a frequency table, analysts can easily identify the values that appear most and least. This is very
useful when wanting to identify trends or outstanding features of the data. For example, in a survey of
consumer age, a frequency table can help determine which age group has the largest proportion.
Frequency tables also provide the basis for other analyses, such as calculating descriptive statistics such
as mean, median, and standard deviation. Knowing the frequency of each value, analysts can easily
calculate these statistical indicators to get an overview of the data. In addition, frequency tables are
suitable for many types of data, including quantitative (continuous or discrete) and qualitative data,
making them a flexible and powerful tool in data analysis. However, frequency tables also have some
limitations. When there are many different classes or values, the frequency table can become very long
and difficult to visualize the overall distribution of the data. This is especially true when the data has
many variables or large variations in values. For example, in a survey with hundreds of different values,
the frequency table will become very cumbersome and difficult to read.
Simple tables
Simple tables are often used when data needs to be presented in a neat and easy-to-understand way.
They are suitable for small tables with a small number of variables and are not too complex. Simple
tables allow for direct comparison of values between variables, making it easy for users to see the
relationships and differences between variables. For example, in a business report, simple tables can be
used to compare sales between quarters of the year or between different products. Although simple
tables are an effective tool, they are less intuitive than other types of charts. This can make it difficult to
see trends and distributions in the data. For example, if the data has many variables or has a large
variation, simple tables will become difficult to read and will no longer be effective. Furthermore, simple
tables are not suitable for large and complex data, because they can easily become confusing and lose
clarity. As the number of variables increases, a simple table becomes difficult to read and understand,
reducing the effectiveness of data presentation.
Pie chart
A pie chart is a visual and easy-to-understand tool for showing the percentage of each part of a whole.
With a pie chart, viewers can quickly and intuitively see the proportion of each part. This makes pie
charts suitable for comparing parts of a whole and communicating information clearly. For example, in a
report on market share of companies in an industry, a pie chart can clearly and easily display the
percentage of each company.
However, pie charts also have limitations. When there are too many categories, the chart becomes
cluttered and difficult to read.
Histogram
A histogram is a visual tool for displaying the shape of the distribution of numerical data. It allows the
analyst to easily notice important features such as skewness, symmetry, and peaks in the data. This is
especially useful when exploring the shape of the distribution of continuous data and detecting hidden
trends. For example, in a study of the height of a group of people, a histogram can clearly show the
shape of the distribution of height and help identify features such as skewness and peaks. However, the
histogram is also affected by the choice of class interval. Choosing different class intervals can change
the shape of the histogram, which can lead to misleading analysis. For example, if the class interval is too
wide, the histogram may not show enough detail in the data. Conversely, if the class interval is too
narrow, the histogram may become too detailed and difficult to understand. Furthermore, histograms
make it difficult to compare values directly, only frequencies within class ranges can be compared. This
can reduce the ability to compare and analyze data.
Choosing the right method of data presentation is very important to ensure that information is
communicated clearly and effectively. Each method has its advantages and disadvantages, so analysts
need to carefully consider based on the characteristics of the data and the purpose of analysis to choose
the most suitable method. Frequency tables should be used when it is necessary to analyze the
frequency of each value in detail and as a basis for statistical calculations.
Simple tables are suitable when you need to directly compare the values of two or more variables neatly
and clearly. For example, in a business report, a simple table might compare sales between different
months or products.
Pie charts are suitable for showing the percentage of each part of the whole in a visual and easy-to-
understand way. For example, in a report on the market share of companies, a pie chart can clearly show
the proportion of each company. Finally, a histogram is the best choice when you want to explore the
distribution shape of numerical data, especially continuous data. For example, in a study on height, a
histogram can clearly show the distribution shape of the data. Choosing the right method of data
presentation is very important to ensure that information is communicated clearly and effectively. Each
method has its advantages and disadvantages, so analysts need to carefully consider based on the
characteristics of the data and the purpose of the analysis to choose the most suitable method.
Justify the rationale for choosing the method of communication
In the business and economics fields, there are many methods to present data effectively and
understandably. One of the most popular methods is Frequency Distribution, which shows the frequency
of occurrence of values in a data set. This method helps to determine which values appear most
frequently, thereby providing a clear view of the distribution of data. In business, it can be used to
analyze the popularity of different products or services, thereby making appropriate strategic decisions.
In addition, Relative Frequency Distribution and Percent Frequency Distribution allow us to consider the
relative frequency and percentage of values compared to the total data. Relative Frequency Distribution
allows us to consider the relative frequency of values compared to the total data. Instead of just showing
the number of occurrences, this method represents the frequency as a percentage of the total data. This
is useful when comparing the prevalence of values in a large data set, helping businesses better
understand the structure and characteristics of the market. Percent Frequency Distribution is a variation
of frequency distribution, in which the frequency of values is shown as a percentage of the total. This
method helps to represent the prevalence of values more understandably and comparably. In
economics, it can be used to analyze the percentages of different consumer groups or the allocation of
resources in a business. Bar Charts and Pie Charts are powerful data visualization tools, with bar charts
using bars to represent the frequency or value of different items. This method is often used to analyze
sales, profits by product or service, or any data that needs to be compared between different groups in a
business.
While Pie Charts divide data into sections of a circle, each section represents a percentage of the whole.
This method is especially useful when we want to show the percentage of components in a whole. In
business, Pie Charts are often used to show revenue structure, expense ratios, or market allocation.
Other methods such as Dot Plots and Histograms help to show the distribution of data in detail, while
Cumulative Distributions allow us to observe the accumulation of values. Scatter diagrams and Trend
Lines are useful tools to analyze the relationship between variables, while a Stacked Bar Chart helps to
compare elements in a whole visually. All of these methods provide different perspectives and help us to
analyze and understand the data better. Frequency Distribution is a useful tool to show the frequency of
occurrence of values in a data set. This method helps to identify which values appear most frequently,
thereby providing a clear view of the distribution of data. In business, it can be used to analyze the
popularity of different products or services, thereby making appropriate strategic decisions.
To better understand the factors affecting market fluctuations and business performance, we need to
apply analytical methods to Vinfast's stock data report.
VinFast Stock Data
To analyze and present the fluctuations of Vinfast stocks, using line charts is an effective choice. Line
charts help us clearly observe trends and changes in data values over specific time periods.

Vinfast stock price fluctuation from March 1 to May 12 2024


One of the most effective ways to present stock price data over time is the line chart. The highlight of the
Line Chart is the ability to track the trend of the data continuously, helping users easily identify changes
and long-term trends. This is especially important for stock price data, where daily fluctuations can
provide important information for investors and analysts.
Another advantage of the Line Chart is the ability to display multiple variables such as the opening price
(Open), the highest price of the day (High), the lowest price of the day (Low), and the closing price
(Close). This gives users a comprehensive view of the stock price movement on each trading day.
Comparing these variables on the same chart helps to detect days with large fluctuations or clear up and
down trends.
The Line Chart is also an intuitive and easy-to-understand tool. Users can quickly grasp information from
the chart without having to perform complex analysis. This is useful in reporting and presenting analysis
results to stakeholders, including those without a deep financial background.
Furthermore, Line Charts support investment decision-making by providing visual information about
stock price trends. For example, identifying uptrends can prompt a decision to buy a stock, while
downtrends can lead to a decision to sell. The presence of variables such as the highest and lowest prices
also helps assess the volatility and risk associated with the investment.
In short, Line Charts are an ideal way to present daily stock price data. Not only does it help to track and
analyze trends, but it also effectively supports investment decision-making.
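A line chart of this kind can be produced as sketched below; the CSV layout (Date, Open, High, Low, Close) and file name are assumptions about how the VinFast price data might be stored:

```python
# A sketch of the line chart described above, plotting Open, High, Low and Close
# over time. The CSV layout (Date, Open, High, Low, Close) is an assumption about
# how the VinFast price data might be stored.
import pandas as pd
import matplotlib.pyplot as plt

prices = pd.read_csv("vinfast_prices.csv", parse_dates=["Date"])   # hypothetical file

for column in ["Open", "High", "Low", "Close"]:
    plt.plot(prices["Date"], prices[column], label=column)

plt.title("VinFast stock price, March 1 to May 12, 2024")
plt.xlabel("Date")
plt.ylabel("Price")
plt.legend()
plt.tight_layout()
plt.show()
```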
To further examine business performance across manufacturers, below is a table of sales data for car manufacturers in the US market in June and the first half of 2023. The table provides detailed information on each manufacturer's total sales, market share, and growth rate compared with the same period last year. This information gives us an overview of the US car market, from which we can draw out trends and evaluate the business performance of car manufacturers during this period.
[Table: Car sales by manufacturer in the US market, June and first half of 2023]

Using a scatter plot to present data on vehicle sales by automaker in the US market in June and the first
half of 2023 will help highlight the relationship between variables and provide insight into the business
performance of each automaker.
[Figure: Scatter plot of car sales by manufacturer in the US market, June and first half of 2023]

Using a scatter plot to present the manufacturers' vehicle sales data for June and the first half of 2023 is a suitable method. First of all, the scatter plot allows us to display the relationship between sales and growth rates: each point on the chart represents an automaker, with the X-axis showing sales and the Y-axis showing year-on-year (YoY) growth. This makes it easy to see how sales and growth relate for each manufacturer, providing insight into business performance.
The scatter plot also helps to detect data clusters, that is, groups of automakers with similar sales and growth rates, which supports analysis of competition between manufacturers. At the same time, it highlights outliers, such as automakers with unusual business results, allowing deeper analysis of the causes behind those results.
Furthermore, scatter charts can visualize additional variables through different symbols and colors, for example using the size of the points to represent market share and colors to distinguish product groups. Their simplicity and effectiveness help readers quickly grasp important information without having to analyze a lot of complex data.
Thus, using scatter charts to present vehicle sales data of manufacturers in the US market in June and
the first half of 2023 is an intuitive and effective method, helping to highlight the relationship between
variables and provide an overview of the business performance of each car manufacturer.
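As an illustration of this idea, the sketch below plots made-up sales and growth figures for four hypothetical automakers, scaling each point by an assumed market share; none of the manufacturer names or numbers come from the actual June/first-half 2023 table.

import matplotlib.pyplot as plt

# Hypothetical figures for illustration only (not the real US-market data)
makers = ["Maker A", "Maker B", "Maker C", "Maker D"]
sales = [350_000, 210_000, 120_000, 60_000]      # H1 2023 unit sales (assumed)
yoy_growth = [12.5, 4.0, 22.0, -3.5]             # % change vs H1 2022 (assumed)
market_share = [18.0, 11.0, 6.5, 3.0]            # % market share (assumed)

# Scale bubble size by market share so larger manufacturers stand out
sizes = [share * 40 for share in market_share]
plt.scatter(sales, yoy_growth, s=sizes, alpha=0.6)
for x, y, name in zip(sales, yoy_growth, makers):
    plt.annotate(name, (x, y))

plt.xlabel("Sales, first half of 2023 (units)")
plt.ylabel("YoY growth (%)")
plt.title("Sales vs. growth by automaker (illustrative)")
plt.show()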
In summary, in the field of business and economics, choosing the right method of presenting and reporting data is very important for conveying information clearly and effectively. Methods such as data tables, bar charts, line charts, pie charts, scatter plots, box plots, and heat maps each have their own advantages and suit different types of data and research purposes. Using these methods not only helps readers grasp important information easily but also supports strategic decision-making and business analysis. Presenting data in a scientific and intuitive way provides both an overview and deeper insight, thereby improving management efficiency and supporting business development.
Conclusion
To sum up, this research has shown how different statistical techniques can be used to improve the
company's information system and decision-making procedures. The corporation's business planning and
operational management can benefit from the insightful information obtained from the analysis of the
dataset provided.
The company can gain a deeper understanding of the dynamics of its quality control and operations by
looking at measurements of variability. The investigation of probability distributions—especially the
normal distribution—has brought to light how crucial it is to take statistical principles into account when
deciding on company operations and procedures. The company now has a way to make sense of the
data and pinpoint important correlations between variables thanks to the use of inferential statistics like
regression analysis and t-tests.
This information can be used to improve capacity planning, inventory control, and other crucial business
operations. Furthermore, the significance of data visualization in helping with decision-making has been
emphasized by the skillful communication of the analysis findings through the use of suitable charts and
tables. With this improved understanding of the various measurement levels, the company can now
select the best techniques for delivering complex information to stakeholders. Based on this report, I recommend that the company continue investing in its information system and incorporate statistical analysis into its decision-making procedures. With this strategy, the company will be able to make better-informed, data-driven decisions that improve business planning, operational effectiveness and, ultimately, its competitiveness in the market.