
Forecasting

Abas Khan, Senior Resident, Hospital Administration, SKIMS

Mohd Sarwar Mir, Resident Medical Officer, SKIMS
Ruksana Hamid, Medical Officer and Anesthesiologist, JK Health
Rayees Ul Hamid Wani, Senior Resident, Emergency Medicine, SKIMS

Corresponding author
Ruksana Hamid, Medical Officer and Anesthesiologist, JK Health

ABSTRACT
Forecasting is the process of making predictions based on past and present data
and most commonly by analysis of trends. A commonplace example might
be estimation of some variable of interest at some specified future
date. Prediction is a similar, but more general term. Both might refer to formal
statistical methods employing time series, cross-sectional or longitudinal data, or
alternatively to less formal judgmental methods.

Forecasting is the process of making predictions based on past and present data and most commonly by analysis of
trends. A commonplace example might be estimation of some variable of interest at some specified future
date. Prediction is a similar, but more general term. Both might refer to formal statistical methods employing time
series, cross-sectional or longitudinal data, or alternatively to less formal judgmental methods. Usage can differ
between areas of application: for example, in hydrology the terms "forecast" and "forecasting" are sometimes
reserved for estimates of values at certain specific future times, while the term "prediction" is used for more
general estimates, such as the number of times floods will occur over a long period.
Risk and uncertainty are central to forecasting and prediction; it is generally considered good practice to indicate
the degree of uncertainty attaching to forecasts. In any case, the data must be up to date in order for the forecast to
be as accurate as possible. In some cases the data used to predict the variable of interest is itself forecast.
Forecasting software packages have become available in recent years, and the subject has attracted scholarly reviews.

Qualitative vs. quantitative methods


Qualitative forecasting techniques are subjective, based on the opinion and judgment of consumers and experts;
they are appropriate when past data are not available. They are usually applied to intermediate- or long-range
decisions. Examples of qualitative forecasting methods are informed opinion and judgment, the Delphi
method, market research, and historical life-cycle analogy.
Quantitative forecasting models are used to forecast future data as a function of past data. They are appropriate to
use when past numerical data is available and when it is reasonable to assume that some of the patterns in the data
are expected to continue into the future. These methods are usually applied to short- or intermediate-range
decisions. Examples of quantitative forecasting methods are last-period demand, simple and weighted N-period moving averages, simple exponential smoothing, Poisson process model based forecasting and multiplicative seasonal indexes. Previous research shows that different methods may lead to different levels of forecasting accuracy. For example, the GMDH neural network was found to have better forecasting performance than classical forecasting algorithms such as single exponential smoothing, double exponential smoothing, ARIMA and the back-propagation neural network.
Average approach
In this approach, the predictions of all future values are equal to the mean of the past data: with observations y_1, ..., y_T, the forecast for any horizon h is (y_1 + y_2 + ... + y_T) / T. This approach can be used with any sort of data where past data is available. Although time series notation has been used here, the average approach can also be used for cross-sectional data (when we are predicting unobserved values, i.e. values that are not included in the data set); the prediction for unobserved values is then the average of the observed values.
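As a minimal sketch (with hypothetical data, not from the source), the average approach reduces to a one-line computation:

```python
import numpy as np

# Hypothetical monthly demand history (illustrative values only).
y = np.array([112.0, 118.0, 132.0, 129.0, 121.0, 135.0])

# Average method: every future value is forecast as the mean of the past data.
average_forecast = y.mean()

# The same number is used for any forecast horizon h.
print(f"Forecast for all future periods: {average_forecast:.1f}")
```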

Naïve approach
Naïve forecasts are the most cost-effective forecasting model, and provide a benchmark against which more
sophisticated models can be compared. This forecasting method is only suitable for time series data. Using the
naïve approach, forecasts are produced that are equal to the last observed value. This method works quite well for
economic and financial time series, which often have patterns that are difficult to reliably and accurately predict. If
the time series is believed to have seasonality, the seasonal naïve approach may be more appropriate where the
forecasts are equal to the value from last season.
Drift method
A variation on the naïve method is to allow the forecasts to increase or decrease over time, where the amount of
change over time (called the drift) is set to be the average change seen in the historical data.
Seasonal naïve approach
The seasonal naïve method accounts for seasonality by setting each prediction to be equal to the last
observed value of the same season. For example, the prediction value for all subsequent months of April
will be equal to the previous value observed for April.
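A minimal sketch of the naïve, drift and seasonal naïve methods just described, using a hypothetical series and an assumed seasonal period:

```python
import numpy as np

y = np.array([210.0, 225.0, 218.0, 240.0, 232.0, 251.0])  # hypothetical series
h = 3          # forecast horizon
m = 4          # assumed seasonal period for the seasonal naive method
T = len(y)

# Naive: every forecast equals the last observed value.
naive = np.repeat(y[-1], h)

# Drift: extrapolate the average historical change, (y_T - y_1) / (T - 1).
drift = y[-1] + np.arange(1, h + 1) * (y[-1] - y[0]) / (T - 1)

# Seasonal naive: each forecast equals the last observed value of the same season.
seasonal_naive = np.array([y[T - m + (i % m)] for i in range(h)])

print(naive, drift, seasonal_naive)
```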

Time series methods


Time series methods use historical data as the basis of estimating future outcomes. They are based on
the assumption that past demand history is a good indicator of future demand.

 Moving average
 Weighted moving average
 Exponential smoothing
 Autoregressive moving average (ARMA) (forecasts depend on past values of the variable being
forecast and on past prediction errors)
 Autoregressive integrated moving average (ARIMA) (ARMA on the period-to-period change in the forecast variable), e.g. Box–Jenkins; seasonal variants such as SARIMA or ARIMARCH

 Extrapolation
 Linear prediction
 Trend estimation (predicting the variable as a linear or polynomial function of time)
 Growth curve (statistics)
 Recurrent neural network
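To make one entry of this list concrete, here is a minimal simple exponential smoothing sketch; the data and the smoothing constant alpha are hypothetical choices:

```python
import numpy as np

def simple_exponential_smoothing(y, alpha):
    """Return the one-step-ahead forecast after smoothing the whole series.

    Each smoothed level is a weighted average of the newest observation
    and the previous level: l_t = alpha * y_t + (1 - alpha) * l_{t-1}.
    """
    level = y[0]                      # initialize the level at the first value
    for value in y[1:]:
        level = alpha * value + (1 - alpha) * level
    return level                      # flat forecast for all future periods

y = np.array([3.0, 5.0, 9.0, 20.0, 12.0, 17.0])  # hypothetical series
print(simple_exponential_smoothing(y, alpha=0.3))
```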
Relational methods
Some forecasting methods try to identify the underlying factors that might influence the variable that is being
forecast. For example, including information about climate patterns might improve the ability of a model to predict
umbrella sales. Forecasting models often take account of regular seasonal variations. In addition to climate, such
variations can also be due to holidays and customs: for example, one might predict that sales of college football
apparel will be higher during the football season than during the off season.
Several informal methods used in causal forecasting do not rely solely on the output of mathematical algorithms,
but instead use the judgment of the forecaster. Some forecasts take account of past relationships between variables:
if one variable has, for example, been approximately linearly related to another for a long period of time, it may be
appropriate to extrapolate such a relationship into the future, without necessarily understanding the reasons for the
relationship.
Causal methods include:

 Regression analysis includes a large group of methods for predicting future values of a
variable using information about other variables. These methods include
both parametric (linear or non-linear) and non-parametric techniques.
 Autoregressive moving average with exogenous inputs (ARMAX)
Quantitative forecasting models are often judged against each other by comparing their in-sample or out-of-
sample mean square error, although some researchers have advised against this. Different forecasting approaches
have different levels of accuracy. For example, it was found in one context that GMDH has higher forecasting
accuracy than traditional ARIMA.
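A minimal sketch of judging two models by out-of-sample mean square error, as described above; the series, the train/test split and the candidate models (naïve and drift) are hypothetical choices:

```python
import numpy as np

y = np.array([120.0, 132.0, 128.0, 141.0, 150.0, 147.0, 158.0, 170.0])
train, test = y[:6], y[6:]            # hypothetical train/test split

# Candidate 1: naive forecast (repeat the last training value).
naive_pred = np.repeat(train[-1], len(test))

# Candidate 2: drift forecast (extrapolate the average historical change).
slope = (train[-1] - train[0]) / (len(train) - 1)
drift_pred = train[-1] + slope * np.arange(1, len(test) + 1)

def mse(actual, forecast):
    return np.mean((actual - forecast) ** 2)

# The model with the lower out-of-sample MSE is preferred.
print("naive MSE:", mse(test, naive_pred))
print("drift MSE:", mse(test, drift_pred))
```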
Judgmental methods
Judgmental forecasting methods incorporate intuitive judgement, opinions and subjective probability estimates.
Judgmental forecasting is used in cases where there is a lack of historical data or during completely new and unique market conditions.
Judgmental methods include:

 Composite forecasts
 Cooke's method
 Delphi method
 Forecast by analogy
 Scenario building
 Statistical surveys
 Technology forecasting
Artificial intelligence methods
 Artificial neural networks
 Group method of data handling
 Support vector machines
Often these are done today by specialized programs loosely labeled:

 Data mining
 Machine learning
 Pattern recognition
Geometric extrapolation with error prediction
Geometric extrapolation can be created with three points of a sequence and the "moment" or "index"; this type of extrapolation has 100% accuracy in predictions for a large percentage of the known series database (OEIS).
Other methods

 Granger causality
 Simulation
 Prediction market
 Probabilistic forecasting and Ensemble forecasting

Forecasting accuracy
The forecast error (also known as a residual) is the difference between the actual value and the forecast value for the corresponding period:

E_t = Y_t − F_t

where E_t is the forecast error at period t, Y_t is the actual value at period t, and F_t is the forecast for period t.
A good forecasting method will yield residuals that are uncorrelated. If there are correlations between residual
values, then there is information left in the residuals which should be used in computing forecasts. This can be
accomplished by computing the expected value of a residual as a function of the known past residuals, and
adjusting the forecast by the amount by which this expected value differs from zero.
A good forecasting method will also have zero mean. If the residuals have a mean other than zero, then the
forecasts are biased and can be improved by adjusting the forecasting technique by an additive constant that equals
the mean of the unadjusted residuals.

Scale-dependent errors
The forecast error, E_t, is on the same scale as the data; as such, these accuracy measures are scale-dependent and cannot be used to make comparisons between series on different scales.
Mean absolute error (MAE) or mean absolute deviation (MAD): MAE = mean(|E_t|)
Mean squared error (MSE) or mean squared prediction error (MSPE): MSE = mean(E_t^2)
Root mean squared error (RMSE): RMSE = sqrt(mean(E_t^2))
Average of errors (ME): ME = mean(E_t)
Percentage errors
These are more frequently used to compare forecast performance between different data sets because they are
scale-independent. However, they have the disadvantage of being extremely large or undefined if Y_t is close to or equal to zero.
Mean absolute percentage error (MAPE): MAPE = mean(|E_t / Y_t|) × 100%
Mean absolute percentage deviation (MAPD): MAPD = sum(|E_t|) / sum(|Y_t|)
Scaled errors
Hyndman and Koehler (2006) proposed using scaled errors as an alternative to percentage errors.
Mean absolute scaled error (MASE): MASE = mean(|E_t|) / ((1 / (T − m)) × sum over t = m+1..T of |Y_t − Y_{t−m}|), where m is the seasonal period (or 1 if non-seasonal) and T is the number of observations.
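A minimal sketch computing the accuracy measures above from hypothetical actuals and forecasts:

```python
import numpy as np

actual = np.array([100.0, 110.0, 105.0, 120.0, 115.0])   # hypothetical Y_t
forecast = np.array([98.0, 112.0, 103.0, 118.0, 119.0])  # hypothetical F_t
m = 1                                                     # non-seasonal series

e = actual - forecast                                     # residuals E_t

mae = np.mean(np.abs(e))
mse = np.mean(e ** 2)
rmse = np.sqrt(mse)
mape = np.mean(np.abs(e / actual)) * 100

# MASE scales errors by the in-sample MAE of the (seasonal) naive method.
naive_mae = np.mean(np.abs(actual[m:] - actual[:-m]))
mase = np.mean(np.abs(e)) / naive_mae

print(f"MAE={mae:.2f} MSE={mse:.2f} RMSE={rmse:.2f} MAPE={mape:.2f}% MASE={mase:.2f}")
```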
Other measures
Forecast skill (SS): SS = 1 − MSE_forecast / MSE_reference, the relative improvement of the forecast over a reference forecast (often the naïve method).
Business forecasters and practitioners sometimes use different terminology. They refer to the MAPD (also written PMAD) as the MAPE, although they compute it as a volume-weighted MAPE. For more information see Calculating demand forecast accuracy.
When comparing the accuracy of different forecasting methods on a specific data set, the measures of aggregate
error are compared with each other and the method that yields the lowest error is preferred.
Training and test sets
When evaluating the quality of forecasts, it is invalid to look at how well a model fits the historical data; the
accuracy of forecasts can only be determined by considering how well a model performs on new data that were not
used when fitting the model. When choosing models, it is common to use a portion of the available data for fitting and to use the rest of the data for testing the model.
Cross-validation
Cross-validation is a more sophisticated version of the training and test set approach.
For cross-sectional data, one approach to cross-validation works as follows:

1. Select observation i for the test set, and use the remaining observations in the
training set. Compute the error on the test observation.
2. Repeat the above step for i = 1,2,..., N where N is the total number of observations.
3. Compute the forecast accuracy measures based on the errors obtained.
This makes efficient use of the available data, as only one observation is omitted at each step.
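A minimal leave-one-out sketch of this cross-sectional procedure; the data and the mean-based predictor are hypothetical stand-ins for a real model:

```python
import numpy as np

y = np.array([23.0, 19.0, 30.0, 25.0, 28.0, 21.0])  # hypothetical cross-sectional data

errors = []
for i in range(len(y)):
    train = np.delete(y, i)          # all observations except observation i
    prediction = train.mean()        # simple mean predictor for illustration
    errors.append(y[i] - prediction) # error on the held-out observation

# Forecast accuracy measure over all leave-one-out errors.
print("MAE:", np.mean(np.abs(errors)))
```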
For time series data, the training set can only include observations prior to the test set. Therefore, no future
observations can be used in constructing the forecast. Suppose k observations are needed to produce a reliable
forecast; then the process works as follows:
1. Starting with i=1, select the observation k + i for the test set, and use the
observations at times 1, 2, ..., k+i–1 to estimate the forecasting model. Compute the
error on the forecast for k+i.
2. Repeat the above step for i = 2,...,T–k where T is the total number of observations.
3. Compute the forecast accuracy over all errors.
This procedure is sometimes known as a "rolling forecasting origin" because the "origin" (k + i − 1) at which the forecast is based rolls forward in time. Further, two-step-ahead or in general p-step-ahead forecasts can be
computed by first forecasting the value immediately after the training set, then using this value with the training set
values to forecast two periods ahead, etc.
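A minimal sketch of the rolling-forecasting-origin procedure above; the series, the value of k and the naïve stand-in model are hypothetical:

```python
import numpy as np

y = np.array([50.0, 52.0, 55.0, 53.0, 58.0, 60.0, 59.0, 63.0])  # hypothetical series
k = 3  # assumed minimum number of observations needed for a reliable forecast

errors = []
for i in range(1, len(y) - k + 1):
    train = y[:k + i - 1]                   # observations 1 .. k+i-1
    forecast = train[-1]                    # stand-in model: naive one-step forecast
    errors.append(y[k + i - 1] - forecast)  # error on observation k+i

print("rolling-origin MAE:", np.mean(np.abs(errors)))
```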

Seasonality
Seasonality is a characteristic of a time series in which the data experiences regular and predictable changes which
recur every calendar year. Any predictable change or pattern in a time series that recurs or repeats over a one-year
period can be said to be seasonal. It is common in many situations, such as a grocery store or even a medical examiner's office, that the demand depends on the day of the week. In such situations, the forecasting procedure calculates the seasonal index of the "season" (seven seasons, one for each day), which is the ratio of the average
demand of that season (which is calculated by Moving Average or Exponential Smoothing using historical data
corresponding only to that season) to the average demand across all seasons. An index higher than 1 indicates that
demand is higher than average; an index less than 1 indicates that the demand is less than the average.
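A minimal sketch of the day-of-week seasonal index described above, using hypothetical per-day average demand (simple averages stand in for the moving-average or exponential-smoothing estimates):

```python
import numpy as np

# Hypothetical average demand per weekday (Mon..Sun), e.g. from historical data.
avg_demand_by_day = np.array([80.0, 75.0, 78.0, 82.0, 95.0, 130.0, 110.0])

overall_average = avg_demand_by_day.mean()

# Seasonal index: ratio of each season's average demand to the overall average.
seasonal_index = avg_demand_by_day / overall_average

# Index > 1 means above-average demand; index < 1 means below-average demand.
for day, idx in zip("Mon Tue Wed Thu Fri Sat Sun".split(), seasonal_index):
    print(f"{day}: {idx:.2f}")
```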
Cyclic behaviour
The cyclic behaviour of data takes place when there are regular fluctuations in the data which usually last for an
interval of at least two years, and when the length of the current cycle cannot be predetermined. Cyclic behavior is
not to be confused with seasonal behavior. Seasonal fluctuations follow a consistent pattern each year so the period
is always known. As an example, during the Christmas period, inventories of stores tend to increase in order to
prepare for Christmas shoppers. As an example of cyclic behaviour, the population of a particular natural
ecosystem will exhibit cyclic behaviour when the population decreases as its natural food source decreases, and
once the population is low, the food source will recover and the population will start to increase again. Cyclic data
cannot be accounted for using ordinary seasonal adjustment since it is not of fixed period.

Applications
Forecasting has applications in a wide range of fields where estimates of future conditions are useful. Not everything can be forecast reliably. If the factors that relate to what is being forecast are known and well understood, and there is a significant amount of data that can be used, very reliable forecasts can often be obtained. If this is not the case, or if the actual outcome is affected by the forecasts, the reliability of the forecasts can be significantly lower.
Climate change and increasing energy prices have led to the use of Egain Forecasting for buildings. This attempts
to reduce the energy needed to heat the building, thus reducing the emission of greenhouse gases. Forecasting is
used in customer demand planning in everyday business for manufacturing and distribution companies.
While the veracity of predictions for actual stock returns is disputed through reference to the efficient-market hypothesis, forecasting of broad economic trends is common. Such analysis is provided by both non-profit groups and for-profit private institutions.
Forecasting foreign exchange movements is typically achieved through a combination of chart and fundamental
analysis. An essential difference between chart analysis and fundamental economic analysis is that chartists study
only the price action of a market, whereas fundamentalists attempt to look to the reasons behind the action.
Financial institutions assimilate the evidence provided by their fundamental and chartist researchers into one note
to provide a final projection on the currency in question.
Forecasting has also been used to predict the development of conflict situations. Forecasters perform research that
uses empirical results to gauge the effectiveness of certain forecasting models. However, research has shown that
there is little difference between the accuracy of the forecasts of experts knowledgeable in the conflict situation and
those by individuals who knew much less.
Similarly, experts in some studies argue that role thinking does not contribute to the accuracy of the forecast. The discipline of demand planning, also sometimes referred to as supply chain forecasting, embraces both statistical forecasting and a consensus process. An important, albeit often ignored, aspect of forecasting is the relationship it holds with planning. Forecasting can be described as predicting what the future will look like, whereas planning predicts what the future should look like. There is no single right forecasting method to use. Selection of a method should be based on your objectives and your conditions (data, etc.); a selection tree can help guide the choice. Forecasting has application in many situations:

Supply chain management - Forecasting can be used in supply chain management to ensure that the right product is
at the right place at the right time. Accurate forecasting will help retailers reduce excess inventory and thus
increase profit margin. Studies have shown that extrapolations are the least accurate, while company earnings
forecasts are the most reliable. Accurate forecasting will also help them meet consumer demand.

 Customer demand planning


 Economic forecasting
 Earthquake prediction
 Egain forecasting
 Energy forecasting for renewable power integration
 Finance against risk of default via credit ratings and credit scores
 Land use forecasting
 Player and team performance in sports
 Political forecasting
 Product forecasting
 Sales forecasting
 Technology forecasting
 Telecommunications forecasting
 Transport planning and Transportation forecasting
 Weather forecasting, Flood forecasting and Meteorology

Limitations
Limitations pose barriers beyond which forecasting methods cannot reliably predict. There are many events and values that cannot be forecast reliably. Events such as the roll of a die or the results of the lottery cannot be forecast because they are random events and there is no significant relationship in the data. When the factors that lead to what is being forecast are not known or well understood, such as in stock and foreign exchange markets, forecasts are often inaccurate or wrong, as there is not enough data about everything that affects these markets for the forecasts to be reliable. In addition, the outcomes of the forecasts of these markets change the behavior of those involved in the market, further reducing forecast accuracy.
The concept of "self-destructing predictions" concerns the way in which some predictions can undermine
themselves by influencing social behavior. This is because "predictors are part of the social context about which
they are trying to make a prediction and may influence that context in the process". For example, a forecast that a
large percentage of a population will become HIV infected based on existing trends may cause more people to
avoid risky behavior and thus reduce the HIV infection rate, invalidating the forecast (which might have remained
correct if it had not been publicly known). Or, a prediction that cybersecurity will become a major issue may cause organizations to implement more cybersecurity measures, thus limiting the issue.
Performance limits of fluid dynamics equations
As proposed by Edward Lorenz in 1963, it is impossible for long-range weather forecasts (those made at a range of two weeks or more) to definitively predict the state of the atmosphere, owing to the chaotic nature of the fluid dynamics equations involved. In numerical models, extremely small errors in the initial input, such as temperatures and winds, double roughly every five days.

See also

 Scenario planning
 Spending wave
 Strategic foresight
 Technology forecasting
 Thucydides Trap
 Time series
 Weather forecasting
 Wind power forecasting

References

French, Jordan (2017). "The time traveller's CAPM". Investment Analysts Journal. 46 (2): 81–96. doi:10.1080/10293523.2016.1255469. S2CID 157962452.

Sanders, Nada R.; Manrodt, Karl B. (2003). "Forecasting Software in Practice: Use, Satisfaction, and Performance". INFORMS Journal on Applied Analytics. 33 (5): 90–93. doi:10.1287/inte.33.5.90.19251. ISSN 2644-0865.

Fildes, Robert; Schaer, Oliver; Svetunkov, Ivan; Yusupova, Alisa (2020). "Survey: What's new in forecasting software?". Operations Research Management Science Today. 47 (4). ISSN 1085-1038.
Quality by Design (QbD) vs. Design of Experiments (DOE)

Quality by Design (QbD) and Design of Experiments (DOE) are both methodologies used in the field of process and product development, but they differ in their approaches and objectives.

Quality by Design (QbD):

QbD is a systematic approach to product and process development that focuses on designing quality into
the product from the beginning. It involves identifying critical quality attributes (CQAs) of the product,
understanding the factors that affect those attributes, and designing a manufacturing process that can
consistently produce a product with the desired quality. QbD aims to understand and control the sources
of variability that can affect product quality by using scientific principles, risk assessment, and statistical
tools.
The key features of QbD include:

1. Product and process understanding: QbD emphasizes a thorough understanding of the product and the process, including the
identification of critical process parameters (CPPs) and critical material attributes (CMAs) that can impact product quality.
2. Risk assessment and mitigation: QbD involves a proactive assessment and management of risks throughout the product lifecycle. It
includes identifying potential risks to product quality, developing strategies to mitigate those risks, and implementing control measures
to ensure consistent quality.
3. Design space and control strategy: QbD involves establishing a design space, which is a multidimensional combination and interaction of
input variables (e.g., process parameters) that have been demonstrated to provide assurance of quality. A control strategy is then
developed to maintain the process within the design space.

Design of Experiments (DOE):

On the other hand, DOE is a statistical technique used to systematically investigate and analyze the
relationship between process variables (factors) and the output (response) of a process. DOE involves
designing a series of experiments where different factors are deliberately varied, while keeping other
factors constant, to understand their individual and combined effects on the process output. The goal of
DOE is to optimize process performance, improve product quality, and understand the factors that have
the most significant impact.

Key aspects of DOE include:

1. Factorial designs: DOE often uses factorial designs where multiple factors and their interactions are studied simultaneously. By
systematically varying the levels of factors, the experiment allows for the estimation of main effects and interaction effects.
2. Response surface methodology (RSM): RSM is frequently employed in DOE to model and optimize the relationship between factors and
response. It helps identify the optimal factor settings that result in the desired response.
3. Statistical analysis: DOE employs statistical analysis techniques to analyze the experimental data and draw conclusions about the
significance of factors and their effects. This allows for the identification of critical factors that have the most significant impact on the
process or product.
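To illustrate point 1 above, a minimal sketch of a 2x2 full factorial design with main-effect and interaction estimates; the factors and the response values are hypothetical:

```python
import numpy as np
from itertools import product

# Hypothetical 2x2 full factorial: two factors at coded levels -1 and +1,
# e.g. temperature and mixing speed, with a measured response (e.g. yield).
levels = [-1, 1]
runs = list(product(levels, levels))           # (-1,-1), (-1,1), (1,-1), (1,1)
response = np.array([78.0, 85.0, 82.0, 96.0])  # hypothetical responses per run

a = np.array([r[0] for r in runs])  # coded level of factor A per run
b = np.array([r[1] for r in runs])  # coded level of factor B per run

# Main effect: average response at the high level minus at the low level.
effect_a = response[a == 1].mean() - response[a == -1].mean()
effect_b = response[b == 1].mean() - response[b == -1].mean()

# Interaction effect: contrast on the A*B product column.
effect_ab = response[a * b == 1].mean() - response[a * b == -1].mean()

print(f"A: {effect_a:.1f}, B: {effect_b:.1f}, AB: {effect_ab:.1f}")
```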

In summary, while QbD is a holistic approach to product and process development that focuses on
understanding and controlling critical quality attributes, DOE is a statistical technique that helps in the
systematic exploration and optimization of process variables. DOE is often used as a tool within the QbD
framework to support the identification and optimization of critical factors that influence product quality.
Is there a step-by-step approach for a successful QbD development?

Developing Quality by Design (QbD) involves a systematic and step-by-step approach to ensure the quality
of pharmaceutical products. Here is a general outline of the process:

1. Define the Target Product Profile (TPP):

 Identify the desired characteristics and quality attributes of the final product.
 Consider factors such as dosage form, route of administration, strength, stability, and patient requirements.

2. Identify Critical Quality Attributes (CQAs):

 Determine the specific product attributes that significantly impact safety, efficacy, or quality.
 Use scientific knowledge, regulatory guidelines, and patient needs to identify CQAs.

Examples of CQAs include potency, purity, stability, dissolution rate, and particle size distribution.

3. Identify Critical Material Attributes (CMAs):

 Assess the characteristics of raw materials, excipients, and components used in the manufacturing process.
 Determine the material attributes that can impact the quality of the final product.

Examples of CMAs include particle size, polymorphic form, moisture content, and chemical composition.

4. Identify Critical Process Parameters (CPPs):

 Determine the key variables that need to be controlled within specific limits to ensure product quality.
 Use scientific understanding, prior knowledge, and experimentation to identify CPPs.

Examples of CPPs include temperature, pressure, mixing speed, drying time, and sterilization conditions.
5. Establish a Design Space:

 Conduct experiments and studies to identify the acceptable ranges and interactions of CPPs.
 Use statistical analysis and modeling techniques to establish a multidimensional design space.
 The design space represents the range of process parameters that provide assurance of quality.

6. Develop a Control Strategy:

 Define a control strategy to ensure the manufacturing process operates within the established design space and meets quality
standards.
 Include process controls, in-process testing, monitoring, and corrective actions in the control strategy.
 The control strategy should address potential risks and deviations from the design space.

7. Conduct Risk Assessment and Mitigation:

 Perform a systematic risk assessment to identify potential risks to product quality.


 Use tools such as Failure Mode and Effects Analysis (FMEA) and Hazard Analysis and Critical Control Points (HACCP).
 Develop risk mitigation strategies to minimize or eliminate identified risks.

8. Implement Continuous Improvement and Knowledge Management:

 Continuously monitor and analyze data throughout the product lifecycle.


 Use the knowledge gained to improve processes, optimize controls, and update the design space if necessary.
 Incorporate feedback and lessons learned to enhance product quality and process understanding.

It's important to note that the specific steps and details of QbD may vary depending on the product,
process, and regulatory requirements. QbD is an iterative and ongoing process that requires collaboration
among various stakeholders, including scientists, engineers, regulatory experts, and quality assurance
professionals.
During which phase of drug development is it most appropriate to implement Quality by
Design (QbD)?

Quality by Design (QbD) principles can be implemented at various phases of drug development, but it is
most effective when incorporated from the early stages. The ideal phase to begin implementing QbD is
during the development of the manufacturing process and formulation. Here are the key stages where
QbD can be applied:

Preclinical Development:

 During preclinical development, QbD principles can be applied to the formulation development and process design.
 The identification of CQAs and CMAs can begin based on the desired characteristics of the product.
 Initial risk assessments can be conducted to understand potential risks and challenges associated with the product and process.
Source: Pharmaceutical QbD: Omnipresence in the product development lifecycle, Refer: 6

Formulation Development:

 QbD can be employed during formulation development to understand the impact of formulation factors on product attributes.
 Design of Experiments (DOE) techniques can be used to systematically vary formulation components and optimize their effects on CQAs.
 The design space can be explored to identify the acceptable ranges of formulation variables that achieve the desired product quality.

Process Development:

 QbD is highly applicable during process development to understand the impact of process parameters on product quality.
 CPPs can be identified using scientific knowledge, prior experience, and experimentation.
 DOE can be employed to study the effects of process parameters on CQAs and optimize the process design.

Technology Transfer and Scale-Up:

 QbD principles play a crucial role in technology transfer and scale-up activities.
 The knowledge gained from the earlier stages can be used to ensure a successful transfer of the optimized process to a larger scale.
 Risk assessments can be performed to identify potential challenges and develop mitigation strategies during the scale-up process.

Process Validation:

 QbD principles guide the development of a robust control strategy for process validation.
 The established design space and control strategy are used to set acceptance criteria and ensure consistent product quality during
validation studies.

Commercial Manufacturing:

 QbD principles continue to be applied during commercial manufacturing to maintain process control and monitor product quality.
 Ongoing monitoring, data analysis, and continuous improvement are employed to optimize the manufacturing process and update the
design space if necessary.

Implementing QbD early in the drug development process enables a proactive and systematic approach to
ensuring quality. It helps identify critical factors, optimize processes, and minimize variability, ultimately
leading to more consistent and reliable pharmaceutical products.

Is QbD necessary for drug development?


While Quality by Design (QbD) is not mandatory for drug development in a regulatory sense, it is
highly recommended and considered good practice by regulatory agencies and industry experts. QbD
provides a systematic and science-based approach to ensure the quality of pharmaceutical products
throughout their lifecycle. Implementing QbD principles offers several benefits:

 Enhanced Understanding: QbD promotes a deeper understanding of the product and its critical quality attributes (CQAs), manufacturing
processes, and associated risks. This knowledge enables better control and mitigation of potential issues.
 Consistent Product Quality: By designing quality into the product and process from the beginning, QbD helps ensure consistency and
reliability in product quality. It minimizes variability and reduces the likelihood of batch failures or deviations.
 Risk Reduction: QbD incorporates risk assessment and mitigation strategies, helping identify and address potential risks early in the
development process. This proactive approach minimizes the chance of quality issues and improves patient safety.
 Process Optimization: QbD encourages the optimization of manufacturing processes by identifying critical process parameters (CPPs)
and establishing a design space. This leads to improved efficiency, reduced waste, and cost savings.
 Regulatory Compliance: QbD aligns with regulatory expectations and guidelines, such as those provided by the International Council for
Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH). Demonstrating a thorough understanding of
product quality and employing a robust control strategy helps meet regulatory requirements.
 Continuous Improvement: QbD emphasizes continuous improvement and knowledge management throughout the product lifecycle.
Ongoing monitoring, data analysis, and feedback loops allow for process refinement and optimization.

While QbD may require additional upfront effort and resources, its implementation can result in long-term
benefits, including improved product quality, reduced risks, and increased efficiency. It promotes a
proactive and science-based approach to drug development, leading to safer and more reliable
pharmaceutical products.

Are there any specific tools recommended for the development of Quality by Design (QbD)?

Several tools and methodologies can be employed during Quality by Design (QbD) development to
facilitate data analysis, experimentation, risk assessment, and process optimization. Here are some
commonly used tools:
 Design of Experiments (DOE): DOE is a statistical tool used to systematically vary process parameters and formulation components to
understand their impact on product quality. It helps identify critical process parameters (CPPs) and establish the relationship between
variables and critical quality attributes (CQAs).

Source: Pharmaceutical QbD: Omnipresence in the product development lifecycle, Refer: 6

 Failure Mode and Effects Analysis (FMEA): FMEA is a systematic approach used to identify potential failure modes and their effects on
product quality. It assesses the severity, occurrence, and detectability of failures and helps prioritize risks for mitigation.
 Fishbone Diagram (Ishikawa Diagram): A fishbone diagram is a visual tool used to identify and categorize potential causes of problems
or quality deviations. It helps identify the root causes of issues and facilitates problem-solving.
 Control Charts: Control charts are statistical tools used to monitor process performance and detect variations or trends over time. They help assess process stability and identify potential out-of-control situations that may affect product quality (see the sketch after this list).
 Risk Assessment Matrices: Risk assessment matrices provide a visual representation of identified risks, their likelihood, and severity.
They help prioritize risks based on their potential impact on product quality and guide risk mitigation strategies.
 Statistical Analysis Software: Various statistical analysis software packages, such as Minitab, JMP, or R, can be used for data analysis,
design of experiments, and statistical modeling. These tools facilitate data visualization, regression analysis, and optimization of process
parameters.
 Quality Tools (e.g., Pareto Analysis, Cause-and-Effect Diagram): Quality tools help identify and analyze the causes of quality issues.
Pareto analysis and cause-and-effect diagrams (also known as fishbone or Ishikawa diagrams) are commonly used to identify the most
significant factors contributing to a problem.
 Process Analytical Technology (PAT): PAT involves the use of real-time process monitoring and control tools, such as spectroscopy,
chromatography, or near-infrared (NIR) analysis. PAT enables continuous monitoring of critical process parameters and facilitates real-
time adjustments to maintain product quality.
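For the control-charts bullet above, a minimal individuals-chart sketch with 3-sigma Shewhart limits; the measurements are hypothetical, and the sample standard deviation is used as a simplification (practice often estimates sigma from the moving range):

```python
import numpy as np

# Hypothetical in-process measurements, e.g. tablet weight in mg.
x = np.array([249.8, 250.3, 250.1, 249.6, 250.4, 251.9, 250.0, 249.7])

center = x.mean()
sigma = x.std(ddof=1)                 # sample standard deviation

# Classic 3-sigma Shewhart limits for an individuals chart.
ucl = center + 3 * sigma
lcl = center - 3 * sigma

for i, value in enumerate(x, start=1):
    flag = "OUT OF CONTROL" if not (lcl <= value <= ucl) else "ok"
    print(f"sample {i}: {value:.1f} ({flag})")
print(f"CL={center:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}")
```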

It's important to note that the selection of tools depends on the specific requirements and context of the
QbD development. The choice of tools should be based on the complexity of the process, available data,
and the expertise of the team. Integrating multiple tools and approaches can provide a comprehensive and
effective framework for QbD implementation.

Thanks to Manasa J for providing expert inputs during the draft, as she comes with rich experience in developing and implementing QbD and DoE for various pharmaceutical processes.

References:

1. Beg, S., Hasnain, M. S., Rahman, M., & Swain, S. (2019). Introduction to quality by design (QbD):
fundamentals, principles, and applications. In Pharmaceutical quality by design (pp. 1-17). Academic Press.

2. Fukuda, I. M., Pinto, C. F. F., Moreira, C. D. S., Saviano, A. M., & Lourenço, F. R. (2018). Design of
experiments (DoE) applied to pharmaceutical and analytical quality by design (QbD). Brazilian journal of
pharmaceutical sciences, 54.

3. Mishra, V., Thakur, S., Patil, A., & Shukla, A. (2018). Quality by design (QbD) approaches in current
pharmaceutical set-up. Expert opinion on drug delivery, 15(8), 737-758.

4. ICH Guidelines: The ICH guideline Q8 (R2) describes the QbD process specifically for drug product; ICH
Q11 guides the QbD development of the active substance.
5. Ter Horst, J. P., Turimella, S. L., Metsers, F., & Zwiers, A. (2021). Implementation of Quality by Design
(QbD) principles in regulatory dossiers of medicinal products in the European Union (EU) between 2014
and 2019. Therapeutic innovation & regulatory science, 55, 583-590.

6. Pharmaceutical QbD: Omnipresence in the product development lifecycle,


https://www.europeanpharmaceuticalreview.com/article/77392/pharmaceutical-qbd-omnipresence-in-the-
product-development-lifecycle/

Because an inaccurate forecast can trigger the bullwhip effect and ruin companies...

This infographic covers everything, including how to calculate the bullwhip effect:

✅ Concept
Small changes in demand at the customer level amplify along the supply chain: each participant orders more than the previous one.
👉 For example: a customer orders 100 more units (a small change), which amplifies to 400 (4X more) along the supply chain.

✅ Effects
👉 Increased inventory levels
👉 Excess inventory
👉 Lower operating profits
👉 Negative impact on cash flow
👉 Potential higher interest cost and lower net income

✅ Triggers
👉 Changes in price; sales promotions
👉 Inaccurate demand forecasts
👉 Long lead times
👉 Bulk orders to obtain volume discounts
👉 Change to new suppliers
👉 Change to new systems
👉 Events and social media
👉 New product introductions
👉 Natural disasters, pandemics, etc.

✅ Actions to take
👉 Review sales forecast history
👉 Determine seasonality and business cycles
👉 Consider product life cycle
👉 Identify events
👉 Define demand drivers (weather for ice cream or soup, for example)
👉 Calculate bullwhip effect and use it for safety stock levels
👉 Critical: improve communication along the supply chain

✅ Data for Calculation


👉 SKU
👉 Sales in units
👉 Shipments in units

✅ Step by Step Calculation


1️⃣Calculate average sales in units per SKU.
2️⃣Calculate average shipments in units per SKU.
3️⃣Calculate the standard deviation of sales in units per SKU (this function is in Excel).
4️⃣Calculate the coefficient of variation (CV) of sales = standard deviation / average sales in units by SKU.
5️⃣Calculate the standard deviation of shipments in units per SKU (this function is in Excel).
6️⃣Calculate the coefficient of variation (CV) of shipments = standard deviation / average shipments in units by SKU.
7️⃣Calculate the bullwhip effect = (CV of shipments − CV of sales) / CV of sales.

✅ Numeric Example Calculation


1️⃣Average sales in units = 49,057
2️⃣Average shipments in units = 52,914
3️⃣Standard deviation of sales in units = 14,972
4️⃣CV of sales = 14,972 / 49,057 = 0.30519
5️⃣Standard deviation of shipments = 17,653
6️⃣CV of shipments = 17,653 / 52,914 = 0.33361
7️⃣Bullwhip effect = (0.33361 − 0.30519) / 0.30519 = 0.09312
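A minimal sketch of this calculation, plugging in the summary statistics from the numeric example above:

```python
# Bullwhip effect from the summary statistics in the example above.
avg_sales, std_sales = 49_057, 14_972
avg_shipments, std_shipments = 52_914, 17_653

# Coefficient of variation = standard deviation / mean.
cv_sales = std_sales / avg_sales              # ~0.30519
cv_shipments = std_shipments / avg_shipments  # ~0.33361

# Bullwhip effect: relative increase in variability from sales to shipments.
bullwhip = (cv_shipments - cv_sales) / cv_sales
print(f"Bullwhip effect: {bullwhip:.5f}")     # ~0.09312
```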

Has your company experienced the bullwhip effect?
