
Monte Carlo Methods and Bayesian Computation: Importance Sampling

Roman Liesenfeld, Eberhard Karls University, Tübingen, Germany


Jean-François Richard, The University of Pittsburgh, Pittsburgh, PA, USA
© 2015 Elsevier Ltd. All rights reserved.
This article is reproduced from the previous edition, volume 15, pp. 10000–10004, © 2001, Elsevier Ltd., with an updated Bibliography section supplied by the Editor.

Abstract

Monte Carlo simulation methods can be used to numerically evaluate expectations of functions of random variables (e.g., posterior moments of parameters of interest) for which no analytical expressions are available. They consist of generating random draws from the relevant distribution and replacing expectations by arithmetic means across such draws. Under appropriate technical conditions, the (statistical) variance of Monte Carlo estimates is inversely proportional to the number of draws. In cases where no sampling density is available that produces sufficiently accurate estimates, one will try to construct an auxiliary sampling density, which is called an importance function. The ratio between the integrand and the selected importance function defines the corresponding remainder function. The initial expectation is then estimated by the arithmetic mean of realizations of the remainder function under random draws from the importance function. An efficient importance function is one which closely resembles the initial integrand under an appropriate metric and, thereby, produces accurate Monte Carlo estimates with as few draws as possible. The article discusses several methods for constructing efficient importance functions at a sufficient level of generality.
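The claim that accuracy improves with the number of draws can be checked numerically. The following sketch is a toy example, not from the article: it estimates E[X²] = 1 for X ~ N(0, 1) by an arithmetic mean across draws and shows the spread of the estimator shrinking as the number of draws S grows.

```python
import random
import statistics

def crude_mc(h, sampler, S, seed=0):
    """Arithmetic mean of h over S random draws from `sampler`."""
    rng = random.Random(seed)
    return sum(h(sampler(rng)) for _ in range(S)) / S

# Toy target: E[X^2] = 1 for X ~ N(0, 1).
h = lambda x: x * x
sampler = lambda rng: rng.gauss(0.0, 1.0)

for S in (100, 10_000):
    # Spread of the estimator across 50 independent replications;
    # it should shrink roughly tenfold when S grows a hundredfold.
    reps = [crude_mc(h, sampler, S, seed=k) for k in range(50)]
    print(S, statistics.stdev(reps))
```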

General Principle

The term importance sampling (IS) designates a method designed to improve the numerical performance (or efficiency) of Monte Carlo (MC) simulation methods for the evaluation of analytically intractable integrals, generally expectations of functions of random variables. Early descriptions of the method can be found, e.g., in Kahn and Marshall (1953), Trotter and Tukey (1956), or in the commonly cited monograph of Hammersley and Handscomb (1964, Section 5.4). Examples of applications are the numerical evaluation of Bayesian posterior expectations of functions of parameters of interest – see, e.g., Kloek and van Dijk (1978) or Geweke (1989) – and that of a likelihood function in the presence of unobserved latent variables – see, e.g., Durbin and Koopman (1997) or Richard and Zhang (2000).

The general principle of IS can be outlined as follows. Consider a situation where one has to numerically evaluate an integral of the form

    I = ∫_D φ(x) dx   [1]

where φ denotes a real function defined on D ⊂ R^p. A case of special interest is that where φ takes the form of the product of a density function m with support D and a function h whose expectation has to be evaluated under m, say

    I = E_m[h(x)] = ∫_D h(x) m(x) dx   [2]

In the subsequent analysis, m will be referred to as an 'initial' sampler, to be distinguished from the importance sampler introduced below.

In low-dimensional applications (say p ≤ 5), one might consider using nonstochastic multivariate quadrature rules, as described, e.g., in Stroud and Secrest (1966) or in Golub and Welsch (1969). However, higher-dimensional applications require the use of MC simulation methods. The two most widely used methods are Monte Carlo Markov chain (MCMC) methods (see Markov Chain Monte Carlo Methods) and IS, which is presented here.

IS requires the selection of a class of samplers, indexed by an auxiliary vector of parameters a ∈ A, say M = {m(x|a); a ∈ A}. While in some applications M might consist of a single sampler, it is generally advisable to select a class of samplers and then, within that class, a specific sampler under an appropriate 'efficiency' criterion. Selection of M and of a sampler within M are discussed further below, once the purpose of IS has been clarified. In the case of formula [2], it will often be the case that M consists of suitable parametric extensions of m and, therefore, that m itself belongs to M, i.e., that there exists a_0 ∈ A such that m(x) ≡ m(x|a_0) on D. While, as discussed further below, such a condition may be instrumental in the construction of M, it is by no means a requirement of the method.

For any given a ∈ A, the integral in formula [1] can be rewritten as

    I = ∫_D [φ(x)/m(x|a)] m(x|a) dx = E_m[φ(x)/m(x|a)]   [3]

In the case of formula [2], this expectation is often rewritten as

    I = E_m[h(x) ω(x; a)]   [4]

where

    ω(x; a) = m(x)/m(x|a)   [5]

denotes the weight function associated with the sampler m(x|a).
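The weighting scheme of formulas [3]–[5] can be sketched in a few lines. The target density m, the sampler m(x|a), and the function h below are illustrative choices (a standard normal target and a wider normal sampler), not examples from the article:

```python
import random
import math

def is_estimate(h, log_m, log_m_a, draw_a, S, seed=1):
    """IS estimate of E_m[h(x)] as in formulas [4]-[5]:
    average h(x) * omega(x; a) over S draws x from the sampler m(x|a)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(S):
        x = draw_a(rng)
        omega = math.exp(log_m(x) - log_m_a(x))  # weight function [5]
        total += h(x) * omega
    return total / S

# Illustrative choice: target m = N(0, 1); sampler m(x|a) = N(0, a^2) with
# a = 1.5, i.e. heavier-tailed than the target so the weights stay bounded.
LOG_NORM = -0.5 * math.log(2 * math.pi)
log_m = lambda x: LOG_NORM - 0.5 * x * x
a = 1.5
log_m_a = lambda x: LOG_NORM - math.log(a) - 0.5 * (x / a) ** 2
draw_a = lambda rng: rng.gauss(0.0, a)

# E_m[x^2] = 1 under the N(0, 1) target; the estimate should be close.
print(is_estimate(lambda x: x * x, log_m, log_m_a, draw_a, S=50_000))
```

The choice of a sampler with heavier tails than the target reflects the boundedness condition on φ/m discussed in the article.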

758 International Encyclopedia of the Social & Behavioral Sciences, 2nd edition, Volume 15 http://dx.doi.org/10.1016/B978-0-08-097086-8.42148-3

For any given a ∈ A, an MC IS estimate of I is then given by

    Î_S(a) = (1/S) Σ_{i=1}^{S} [φ(x)/m(x|a)]_{x = x̃_i(a)}   [6]

where {x̃_i(a); i: 1,...,S} denotes a set of S identically independently distributed (hereafter i.i.d.) draws from the auxiliary sampler m(x|a). If m(x|a) is such that the sampling variance

    σ²(a) = ∫_D [φ(x)/m(x|a) − I]² m(x|a) dx   [7]

is finite, then a central limit theorem applies, whereby the random variable √S (Î_S(a) − I) converges in distribution towards a Normal with zero mean and variance σ²(a). Let σ̂²_S(a) denote the MC finite sample variance of the ratio φ/m in formula [6]. A numerical estimate of the variance of the estimator Î_S(a) is then given by (1/S) σ̂²_S(a). Note, therefore, that once a sampler has been selected, the numerical accuracy of the corresponding IS estimator of I, as measured by its MC standard deviation, is of the order S^(−1/2). Poor selection of a may require a prohibitively large MC sample size in order to achieve sufficient accuracy. Conditions for the finiteness of σ²(a) are discussed, e.g., in Geweke (1996) or Stern (1997). A sufficient condition is that |φ/m| be bounded above on D. Convergence in the absence of such a condition may be notoriously more difficult to verify.

If there exists a* ∈ A such that φ(x) = m(x|a*) on D, then σ²(a*) = 0 and Î_S(a*) = I for all S ≥ 1. More generally, one would aim at selecting a value â in A which minimizes the MC sampling variance σ²(a). Since, however, σ²(a) does not have an analytical expression and can only be numerically approximated, the actual choice of â will often be based upon more heuristic considerations – some of which are discussed further below – aiming at constructing importance samplers which 'approximate' φ(x) on D. It should be noted that the term importance sampling emphasizes the fact that m(x|a) is precisely meant to oversample in the 'important' parts of the domain D, i.e., those parts which contribute most to the value of the integral I.

It is, however, useful to draw users' attention to a potentially misleading interpretation of this notion of region of importance. In practice, it is generally the case that large, possibly infinite, values of σ²(a) will not be generated by values of x in the region of importance (where most importance samplers are expected to perform well by design), but instead by values of x which can be very far away in the tails of m(x|a) and which are, therefore, most unlikely to be ever drawn in a finite-sample MC simulation. This situation can be particularly pernicious when D is unbounded. Earlier applications of IS to the computation of Bayesian posterior moments did occasionally suffer from such pathologies, especially in situations where noninformative prior distributions were used in combination with a (qualitatively) poorly identified model; see Statistical Identification and Estimability. (Note that in such cases, truncation of the support D of the posterior density may not be an acceptable solution, insofar as the moments of interest might be highly sensitive to such a truncation.) A classical example of nonexistence of σ²(a), which has been widely cited in the early Bayesian literature on IS, is that where φ^(−1) has polynomial tails (e.g., if m in formula [2] takes the form of a Student-t density kernel) and one selects for M a class of Normal samplers. Nonexistence of σ²(a) may be particularly difficult to detect when it originates from such low-probability events. Indications of a potential problem are the empirical findings that the MC sampling variance of Î_S(a) does not decrease inversely proportionally to S or, relatedly, in the context of formula [4], that a few draws, often less than a handful, carry excessive relative weights. It is, therefore, absolutely critical to pay very close attention to estimates of the MC sampling variance of Î_S(a) and/or to the behavior of the weight function in eqn [4], at least in those cases where the existence of σ²(a) cannot be formally assessed.

Construction of Importance Samplers

Thus the two critical issues in the design of a successful IS implementation are: (a) the selection of an appropriate class M of auxiliary samplers; and (b) the selection within that class of an 'efficient' sampler, i.e., one for which the MC sampling variance σ²(a) is as small as possible. A number of truly innovative proposals to that effect are found in the recent econometric and statistical literature, though there remains a significant lack of generic principles for the construction of efficient importance samplers (one recent exception to this general statement is presented on its own in the section Efficient Importance Sampling below). As discussed further in the concluding remarks, this lack of generic principles is perceived by some as a significant impediment to routine use of IS. In the following, some of the most useful proposals which have been offered in the late twentieth-century literature are briefly presented.

While Tierney and Kadane (1986) do not discuss IS per se, they propose an extension of the Laplace method of integration, as discussed, e.g., in De Bruijn (1958), to construct approximations for Bayesian posterior moments and marginal densities. The very same idea can be used to construct an importance sampler to approximate numerically the exact integrals. See Richard and Zhang (1998) for an application of this principle to a logit model with unobserved heterogeneity. Since, however, the Laplace method essentially serves to construct a local approximation around the mode of φ(x), it remains important to pay special attention to tail behavior in order to control for σ²(a), as discussed above.

Geweke (1989) pays attention to the minimization of the MC sampling variance σ²(a) within specific families of IS densities, typically multivariate Student-t densities and skewed generalizations thereof, labeled split-t densities. A problem with his approach, which is characteristic of any attempt at minimizing σ²(a), is that the latter depends upon integrals which have themselves to be numerically evaluated (a problem which will be addressed in the section Efficient Importance Sampling below). In the same paper Geweke also discusses more heuristic procedures for tailoring m(x|a) in ways which proved quite successful in some Bayesian applications. It should also be noted that Geweke's article includes numerous references relative to MC simulation and numerical integration.

Evans (1991) develops adaptive methods which rely upon earlier draws of x to identify large values of the weight function defined in formula [5] and accordingly to modify the importance sampler.

While such adaptive methods can occasionally prove useful, they do suffer from the problem explained earlier, whereby values of x for which the weight function ω(x; a) is large are typically highly unlikely to be drawn.

Owen and Zhou (2000) discuss improvements of the IS technique which are shown to work quite well in low-dimensional applications. One proposal consists of combining IS with the method of 'control variates' to produce an upper bound for the MC sampling variance. (The method of control variates uses knowledge of approximations to the integral I to reduce the MC sampling variance of Î_S(a). See Hammersley and Handscomb (1964) or Hendry (1984) for detailed descriptions of the control variate method.) Another extension consists of pooling several IS estimators in order to reduce MC sampling variances.

Durbin and Koopman (1997) apply IS to evaluate the likelihood function of non-Gaussian state space models. Their application is particularly interesting in that it consists of a fairly sophisticated and, in a sense, indirect application of IS in dimensions which are significantly larger than those of the applications discussed above. Let p(y|λ, θ) denote a non-Gaussian density for a vector y of observables, conditional on a vector λ of unobserved variables and a vector θ of unknown parameters. Let v(λ|θ) denote the density of λ given θ. Finally, let g(y|λ, θ) denote a Gaussian approximation to p(y|λ, θ). Under g one can use standard Kalman filter techniques, as discussed, e.g., in Harvey (1990), to compute a likelihood function L_g(θ) and, in parallel, a density v_g(λ|y, θ) for the latent variables. It is then trivial to show that the actual likelihood function

    L_p(θ) = ∫ p(y|λ, θ) v(λ|θ) dλ   [8]

can be rewritten as

    L_p(θ) = L_g(θ) ∫ [p(y|λ, θ)/g(y|λ, θ)] v_g(λ|y, θ) dλ   [9]

The advantage of formula [9] over formula [8] is that it relies upon numerical integration to evaluate departures from the Gaussian likelihood L_g, rather than the likelihood itself. The density v_g in formula [9] plays exactly the role of an importance sampler. Durbin and Koopman (1997) then discuss the construction of the approximating Gaussian model in order to secure maximal numerical accuracy. The interest of that application is to show that the selection of an importance sampler can be approached indirectly via the construction of an operational approximation to a complex model.

Another sophisticated implementation of IS is found in Madras and Piccioni (1999), where the authors propose to provide greater flexibility in the choice of the class of samplers M, often a sticky problem in applications of IS, by allowing m(x|a) to be the (implicit) equilibrium distribution associated with a Monte Carlo Markov chain simulator (see Monte Carlo Methods and Bayesian Computation: Overview; Markov Chain Monte Carlo Methods). The main advantage of this extension lies in the fact, discussed elsewhere, that MCMC simulators are very flexible. Its potential drawback is that its convergence properties are typically more difficult to assess than those of a conventional importance sampler.

An approach which also combines IS and MCMC is used by DiMatteo and Kadane (2001). They employ a standard MCMC technique to compute posterior moments for a given prior. Then, in order to assess robustness relative to the prior, they propose to rerun the analysis under different priors using the initial MCMC sampler as importance sampler.

An important message which emerges from this discussion is that efficient applications of IS have to be carefully tailored to the problem under consideration. This has proved to be a significant obstacle to routine applications of IS and is especially true for Bayesian applications, which can produce notoriously 'ill-behaved' and, therefore, difficult-to-approximate posterior densities. Actually, Bayesian statisticians have largely abandoned IS in favor of MCMC. It should be noted, however, that while MCMC can be more flexible than IS, its convergence is not easily assessed in practice and may require large numbers of MC draws. In fact, both methods can occasionally produce spectacular failures if not carefully designed and controlled.

Efficient Importance Sampling

Expanding upon an earlier contribution by Danielsson and Richard (1993), Richard and Zhang (1998) have recently proposed a generic principle for constructing efficient importance samplers in cases where φ is a positive function. For ease of presentation, the method will be introduced here without reference to dimensionality. A straightforward rearrangement of terms in formula [7] produces the following expression:

    σ²(a) = I ∫_D h(d²(x, a)) φ(x) dx   [10]

where

    d(x, a) = ln [φ(x) / (m(x|a) I)]   [11]

    h(c) = e^√c + e^(−√c) − 2   [12]

Note that h is monotone convex on R_+ and that h(c) ≥ c. Minimization of σ²(a) with respect to a requires nonlinear optimization. Consider, therefore, the simpler function

    q(a) = ∫_D [ln φ(x) − c − ln m(x|a)]² φ(x) dx   [13]

where c stands for an intercept in place of the unknown ln I. Next, one selects an initial sampler, say m(x|a_0) – it could be m(x) in formula [2], or any other sampler based, for example, on a local approximation of φ(x) of the sort used by Tierney and Kadane (1986). An MC approximation to q(a) is then given by

    q̂_R(a) = (1/R) Σ_{i=1}^{R} [ln φ(x) − c − ln m(x|a)]² ω_0(x) |_{x = x̃_i^0}   [14]

where

    ω_0(x) = φ(x)/m(x|a_0)   [15]

and {x̃_i^0; i: 1,...,R} denotes a set of R i.i.d. draws from m(x|a_0). If, furthermore, m(x|a) belongs to the exponential family of distributions, then minimization of q̂_R(a) with respect to a takes the form of a simple weighted linear least squares problem.

It may be advisable to omit the weight function ω_0(x) if it exhibits excessive variance, in cases where m(x|a_0) is found to provide a poor global approximation to φ(x). The method can be iterated by using as initial sampler in any given round the optimized sampler from the previous round. Experience suggests that no more than 3 to 4 iterations are required to produce a stable solution, irrespective of the initial sampler m(x|a_0). In order to assess how much efficiency is lost by minimizing q(a) instead of σ²(a), let â and a* denote the minimizers of q and σ², respectively. Taking advantage of the convexity of h, Jensen's inequality implies that

    σ²(â) ≥ σ²(a*) ≥ h(q(a*)) ≥ h(q(â))   [16]

MC estimates of the two bounds are immediate byproducts of the minimization of q̂_R(a). Unless they were found to be far apart, there would be no reason for solving the more complex minimization problem associated with σ²(a).

The method is extremely accurate and fast in all cases where the integrand φ can be well approximated by a sampler from the exponential family of distributions. The real strength of the method lies in very high-dimensional (1000+) integration of 'well-behaved' integrands. High-dimensional minimization of q̂_R(a) does require an additional factorization which is not discussed here. The final algorithm is one whereby minimization is recursively carried out one dimension at a time. It has proved useful for likelihood evaluation in dynamic latent variable models such as the one discussed earlier in Durbin and Koopman (1997). Liesenfeld and Richard (2000) successfully apply Efficient Importance Sampling (EIS) to estimate parametric as well as semi-nonparametric dynamic stochastic volatility models for three daily financial series (IBM stock prices, S&P 500 stock index, and Dollar/Deutsche Mark exchange rate). Dimensions of integration are of the order of 4700. In each case highly accurate estimates of the likelihood function are obtained with as few as R = S = 50 MC draws, and each individual likelihood evaluation requires on the order of 8 s on a Pentium II 466 MHz notebook.

Such outstanding performance originates from the fact that these high-dimensional integrands turn out to be well-behaved, reflecting the fact that the data happen to be quite informative on the unobserved volatilities. A pilot application to a logit panel model with random effects along both dimensions had proved equally successful (see Richard and Zhang, 1998 for details).

Smooth Functional Integration

In numerous applications, such as likelihood function evaluation, the integral to be evaluated is itself a function of some argument δ, say

    I(δ) = ∫_{D(δ)} φ(x; δ) dx,  δ ∈ Δ   [17]

The argument δ is then carried along in all the formulae previously discussed. Most importantly, one cannot expect a single sampler to be efficient for all δ's. Hence, IS requires the selection of a particular a, say a(δ), for each relevant value of δ. If, furthermore, I(δ) has to be optimized with respect to δ, then smoothness of the functional approximation Î_S(δ) becomes critical and cannot be secured with independent pointwise MC estimation, however accurate.

In such cases, it becomes essential that all IS sequences {x̃_i(a(δ)); i: 1,...,S} be obtained by transformation of a common sequence of random draws {u_i; i: 1,...,S} from a distribution which does not depend on a, say

    x̃_i(a) = H(u_i, a),  i: 1,...,S,  ∀a ∈ A   [18]

The u_i's, the so-called common random numbers, would typically be uniforms (in which case H denotes the inverse of the distribution function of x given a) or standardized normals (when x given a is itself normally distributed). See, e.g., Devroye (1986) for detailed descriptions of such transformations. Note, in particular, that inversion of a distribution function can be a computer-intensive operation but, nevertheless, is indispensable for smoothness. (The use of carefully designed interpolation techniques can significantly reduce computing times.) It should be noted that smoothness is not only required for numerical convergence of an optimization algorithm but also plays a central role in the derivation of the asymptotic properties of a broad range of simulation estimators. See, e.g., Pakes and Pollard (1989) or Gouriéroux and Monfort (1996).

Conclusion

IS is one of the key techniques currently available for the evaluation of analytically intractable integrals by MC simulation. It performs at its best when used in combination with 'acceleration' techniques such as, e.g., control variates, mixtures, and common random numbers. Very high-dimensional (1000+) integration is feasible as long as the integrand is reasonably well-behaved. Further developments are required to guide the design of efficient IS samplers. IS is not an all-purpose technique, and lack of flexibility in the construction of samplers may complicate its application to ill-behaved integrands, for which alternative methods of integration such as MCMC might prove more useful. Nevertheless, IS has proved to be a remarkably useful technique for a wide range of applications including very high-dimensional integrations.

Bibliography

Bernardo, J.M., Bayarri, M.J., Berger, J.O., Dawid, A.P., Heckerman, D., Smith, A.F.M., West, M., 2003. The variational Bayesian EM algorithm for incomplete data: with application to scoring graphical model structures. Bayesian Statistics 7, 453–464.
Bratley, P., Fox, B.L., Schrage, L.E., 1987. A Guide to Simulation, second edn. Springer Verlag, New York.
Danielsson, J., Richard, J.-F., 1993. Accelerated Gaussian importance sampler with applications to dynamic latent variable models. Journal of Applied Econometrics 8, S153–S173.
De Bruijn, N.G., 1958. Asymptotic Methods in Analysis. North Holland, Amsterdam.
Devroye, L., 1986. Non-Uniform Random Variate Generation. Springer Verlag, New York.
DiMatteo, I., Kadane, J., 2001. Vote tampering in a district Justice election in Beaver County, Pennsylvania. Journal of the American Statistical Association 96, 510–518.
Durbin, J., Koopman, S.J., 1997. Monte Carlo maximum likelihood estimation for non-Gaussian state space models. Biometrika 84, 669–684.
Evans, M., 1991. Adaptive importance sampling and chaining. Contemporary Mathematics (Statistical Multiple Integration) 115, 137–142.

Geweke, J., 1989. Bayesian inference in econometric models using Monte Carlo integration. Econometrica 57, 1317–1339.
Geweke, J., 1996. Monte Carlo simulation and numerical integration. In: Amman, H., Kendrick, D., Rust, J. (Eds.), The Handbook of Computational Economics, vol. 1. North Holland, Amsterdam, pp. 731–800.
Golub, G.H., Welsch, J.H., 1969. Calculation of Gaussian quadrature rules. Mathematics of Computation 23, 221–230.
Gouriéroux, C., Monfort, A., 1996. Simulation Based Econometric Methods. In: CORE Lecture Series. Oxford University Press, Oxford, UK.
Hammersley, J.M., Handscomb, D.C., 1964. Monte Carlo Methods. Methuen, London.
Harvey, A.C., 1990. Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge University Press, Cambridge, UK.
Hendry, D.F., 1984. Monte Carlo experimentation in econometrics. In: Griliches, Z., Intriligator, M.D. (Eds.), The Handbook of Econometrics. North Holland, Amsterdam.
Hesterberg, T., 1995. Weighted average importance sampling and defensive mixture distributions. Technometrics 37, 185–194.
Kahn, H., Marshall, A., 1953. Methods of reducing sample size in Monte Carlo computations. Journal of the Operations Research Society of America 1, 263–278.
Kloek, T., van Dijk, H.K., 1978. Bayesian estimates of equation system parameters: application of integration by Monte Carlo. Econometrica 46, 1–19.
Liesenfeld, R., Richard, J.-F., 2000. Univariate and multivariate stochastic volatility models: estimation and diagnostics. Mimeo, Eberhard-Karls-Universität, Tübingen, Germany.
Madras, N., Piccioni, M., 1999. Importance sampling for families of distributions. The Annals of Applied Probability 9, 1202–1225.
Neal, R.M., 2001. Annealed importance sampling. Statistics and Computing 11 (2), 125–139.
Owen, A., Zhou, Y., 2000. Safe and effective importance sampling. Journal of the American Statistical Association 95, 135–143.
Pakes, A., Pollard, D., 1989. Simulation and the asymptotics of optimization estimators. Econometrica 57, 1027–1057.
Raftery, A.E., Bao, L., 2010. Estimating and projecting trends in HIV/AIDS generalized epidemics using incremental mixture importance sampling. Biometrics 66 (4), 1162–1173.
Richard, J.-F., Zhang, W., 1998. Efficient High-Dimensional Monte Carlo Importance Sampling. Mimeo, University of Pittsburgh, PA.
Richard, J.-F., Zhang, W., 2000. Accelerated Monte Carlo integration: an application to dynamic latent variables models. In: Mariano, R., Schuermann, T., Weeks, M. (Eds.), Simulation-Based Inference in Econometrics: Methods and Applications. Cambridge University Press, Cambridge, UK.
Stern, S., 1997. Simulation-based estimation. Journal of Economic Literature 35, 2006–2039.
Stroud, A.H., Secrest, D.H., 1966. Gaussian Quadrature Formulae. Prentice-Hall, Englewood Cliffs, NJ.
Tierney, L., Kadane, J.B., 1986. Accurate approximations for posterior moments and marginal densities. Journal of the American Statistical Association 81, 82–86.
Trotter, H.F., Tukey, J.W., 1956. Conditional Monte Carlo for normal samples. In: Meyer, H.A. (Ed.), Symposium on Monte Carlo Methods. Wiley, New York.
Yuan, C., Druzdzel, M.J., 2006. Importance sampling algorithms for Bayesian networks: principles and performance. Mathematical and Computer Modelling 43 (9), 1189–1207.
