
Distance Weighted Moving Averages (DWMA and IDWMA)
DECEMBER 18, 2014
by david varadi

The distance weighted moving average is another nonlinear filter that provides the basis for further research and exploration. In its traditional form, a distance weighted moving average (DWMA) is designed to be a robust version of a moving average to reduce the impact of outliers. Here is the calculation from the Encyclopedia of Math:
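The standard form of the distance-weighted mean weights each point in the window by the inverse of its total distance to all of the other points:

$$w_i = \frac{1}{\sum_{j=1}^{n} \lvert x_i - x_j \rvert}, \qquad \mathrm{DWMA} = \frac{\sum_{i=1}^{n} w_i\, x_i}{\sum_{i=1}^{n} w_i}$$

So for an illustrative window such as {1, 2, 3, 12}, the total distance from 12 to the other values is much larger than for the rest, giving it a correspondingly small weight.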

Notice in the example above that 12 is clearly an outlier relative to the other data points and is therefore assigned less weight in the final average. The advantage of this approach over simple winsorization (omitting identified outliers from the calculation) is that all of the data is used and no arbitrary threshold needs to be specified. This is especially valuable for multi-dimensional data. By squaring the distance values in the calculation of the DWMA instead of simply taking the absolute value, it is possible to make the average even more insensitive to outliers. Notice that this concept can also be reversed to emphasize outliers or simply larger data points. This can be done by skipping the inversion of the distances and using them directly as weights. This can be called an inverse distance weighted moving average, or IDWMA, and is useful in situations where you want to ignore small moves in a time series that can be considered white noise and instead make the average more responsive to breakouts. Furthermore, this method may prove more valuable for use in volatility calculations where sensitivity to risk is important. The chart below shows how these different moving averages respond to a fictitious time series with outliers:
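Here is a minimal sketch of the three averages on a toy window containing an outlier (illustrative data, not the series behind the original chart):

```python
import numpy as np

def dwma(window, power=1):
    """Distance weighted moving average: each point is weighted by the
    inverse of its summed absolute distance to the other points.
    Setting power=2 squares the distances for even less outlier sensitivity."""
    x = np.asarray(window, dtype=float)
    dist = np.abs(x[:, None] - x[None, :]).sum(axis=1)
    dist = np.where(dist == 0, np.finfo(float).eps, dist)  # guard against identical points
    w = (1.0 / dist) ** power
    return np.sum(w * x) / np.sum(w)

def idwma(window):
    """Inverse variant: use the distances themselves as weights, so that
    outliers and large moves are emphasized rather than damped."""
    x = np.asarray(window, dtype=float)
    w = np.abs(x[:, None] - x[None, :]).sum(axis=1)
    return x.mean() if w.sum() == 0 else np.sum(w * x) / np.sum(w)

window = [1.0, 2.0, 1.5, 2.5, 12.0, 2.0, 1.8, 2.2]  # one large jump
print("SMA  :", np.mean(window))
print("DWMA :", dwma(window))
print("IDWMA:", idwma(window))
```

As expected, the DWMA sits closest to the bulk of the data, the IDWMA is pulled toward the jump, and the SMA falls in between.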

Notice that the DWMA is the least sensitive to the price moves and large jumps, while the IDWMA is the most sensitive. Comparatively, the SMA response is in between the DWMA and the IDWMA. The key is that no one moving average is superior to another per se; rather, each is valuable for different applications and can perform better or worse on different time series. With that in mind, let's look at some practical examples. My preference is typically to use returns rather than prices, so in this case we will look at applying the different moving average variations (DWMA, IDWMA and SMA) to two different time series: the S&P 500 and Gold. Traders and investors readily acknowledge that the S&P 500 is fairly noisy, especially in the short term. In contrast, Gold tends to be unpredictable using long-term measurements, but large moves tend to be predictable in the short term. Here is the performance using a 10-day moving average with the different variations from 1995 to present. The rules are: long if the average is above zero, and cash if it is below (no interest on cash is assumed in this case):
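Here is a rough sketch of the test rules, assuming a series of daily returns is already available; the dwma and idwma helpers are the ones sketched above, and the file and column names are purely illustrative:

```python
import numpy as np
import pandas as pd

def backtest(returns: pd.Series, avg_fn, window: int = 10) -> pd.Series:
    """Long the next day when the rolling average of returns is above zero,
    otherwise hold cash (assumed to earn nothing)."""
    signal = returns.rolling(window).apply(avg_fn, raw=True)
    position = (signal > 0).astype(float).shift(1).fillna(0.0)  # act on the next bar
    return position * returns

# Illustrative usage, comparing the three filters on one price series:
# prices = pd.read_csv("spx.csv", index_col=0, parse_dates=True)["close"]
# rets = prices.pct_change().dropna()
# for name, fn in [("SMA", np.mean), ("DWMA", dwma), ("IDWMA", idwma)]:
#     equity = (1 + backtest(rets, fn)).cumprod()
#     print(name, round(equity.iloc[-1], 2))
```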

Consistent with anecdotal observation, the DWMA performs the best on the S&P 500 by filtering out large noisy or mean-reverting price movements. The IDWMA, in contrast, performs the worst because it distorts the average by emphasizing these moves. But the pattern is completely different with Gold. In this case the IDWMA benefits from highlighting these large (and apparently useful) trend signals, while the DWMA performs the worst. In both cases the SMA has middling performance. One of the disadvantages of a distance weighted moving average is that the calculation ignores the position in time of each data point. An outlier that occurred, for example, over 60 days ago is less relevant than one that occurs today. This aspect can be addressed through clever manipulation of the calculation. However, the main takeaway is that it is possible to use different weighting schemes for a moving average on different time series and achieve potentially superior results. Perhaps an adaptive approach would yield good results. Furthermore, careful thought should go into the appropriate moving average calculation for different types of applications. For example, you may wish to use the DWMA instead of the median when calculating correlations, which can be badly distorted by outliers. Perhaps using a DWMA for performance or trade statistics makes sense as well. As mentioned earlier, using an IDWMA is helpful for volatility-based calculations in many cases. Consider this a very simple tool to add to your quant toolbox.

Combining Acceleration and Volatility into a Non-Linear Filter (NLV)
DECEMBER 3, 2014
by david varadi

The last two posts presented a novel way of incorporating acceleration as an alternative measure of risk. The preliminary results, and also intuition, demonstrate that it deserves consideration as another piece of information that can be used to forecast risk. While I posed the question as to whether acceleration was a better indicator than volatility, the more useful question is whether we can combine the two into perhaps a better indicator than either in isolation. Traditional volatility is obviously more widely used, and is critical for solving traditional portfolio optimization. Therefore, it is a logical choice as a baseline indicator.
Linear filters such as moving averages and regressions generate output that is a linear function of the input. Non-linear filters, in contrast, generate output that is non-linear with respect to the input. An example of a non-linear filter would be a polynomial regression, or even the humble median. The goal of non-linear filters is to create a superior means of weighting data to create more accurate output. By using a non-linear filter it is possible to substantially reduce lag and increase responsiveness. Since volatility is highly predictable, it stands to reason that we would like to reduce lag and increase responsiveness as much as possible to generate superior results.
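As a tiny illustration of the distinction, using the median mentioned above (numbers are made up):

```python
import numpy as np

window = np.array([0.1, 0.2, 0.1, 5.0, 0.2])          # one large shock in the window
print("mean (linear):      ", window.mean())          # dragged upward by the shock
print("median (non-linear):", np.median(window))      # barely affected by it
```

Shifting a single input moves the mean proportionally, but the median can ignore the change entirely, which is what makes it non-linear.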
So how would we create a volatility measure that incorporates acceleration? The answer is that we need to dynamically weight each squared deviation from the average as a function of the magnitude of acceleration, where greater absolute acceleration should generate an exponentially higher weighting on each data point. Here is how it is calculated for a 10-day NLV (don't panic, I will post a spreadsheet in the next post; a rough code sketch also follows the steps below):
A) Calculate the rolling series of the square of the daily log returns minus their average return.
B) Calculate the rolling series of the absolute value of the first difference in log returns (acceleration/error).
C) Take the current day's value from B and divide it by some optional average, such as 20 days, to get a relative value for acceleration.
D) Raise C to the exponent of 3, or some constant of your choosing; this is the rolling series of relative acceleration constants.
E) Weight each daily value in A by the current day's D value divided by the sum of D values over the last 10 days.
F) Divide the calculation found in E by the sum of the weighting constants (the sum of D values divided by their sum); this is the NLV, and is analogous to the computation of a traditional weighted moving average.
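Here is a rough code sketch of steps A through F, assuming daily log returns as the input; the parameter names, and the reading of the "average return" in step A as a rolling mean over the same 10-day window, are my own assumptions:

```python
import numpy as np
import pandas as pd

def nlv(log_returns: pd.Series, window: int = 10,
        accel_window: int = 20, power: float = 3.0) -> pd.Series:
    """Non-linear volatility: squared return deviations weighted by relative acceleration."""
    # A) squared deviations of log returns from their (rolling) average
    dev_sq = (log_returns - log_returns.rolling(window).mean()) ** 2
    # B) absolute first difference of log returns (acceleration / error)
    accel = log_returns.diff().abs()
    # C) relative acceleration: each day's value versus its own recent average
    rel_accel = accel / accel.rolling(accel_window).mean()
    # D) raise to a power (3 here) to emphasize large accelerations
    w = rel_accel ** power
    # E/F) weight each value in A by its D value and normalize by the sum of the
    #      weights over the window, like a traditional weighted moving average
    num = (dev_sq * w).rolling(window).sum()
    den = w.rolling(window).sum()
    return num / den  # take the square root to quote it in standard-deviation units

# Illustrative usage:
# rets = np.log(prices).diff().dropna()
# vol10 = nlv(rets, window=10)
```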
And now for the punchline: here are the results versus the different alternative measures presented in the last post:

The concept shows promise as a hybrid measure of volatility that incorporates acceleration. The general calculation can be applied in many different ways, but this method is fairly intuitive and generic. In the next post I will show how to make this non-linear filter even more responsive to changes in volatility.
