API documentation

Exported Functions

TermStructureModels.Forecast (Type)
@kwdef struct Forecast <: PosteriorSample

It contains results of the scenario analysis: the conditional predictions for yields, factors = [PCs macros], and term premiums.

  • yields
  • factors
  • TP: term premium forecasts
source
TermStructureModels.Hyperparameter (Type)
@kwdef struct Hyperparameter
  • p::Int
  • q::Matrix
  • nu0
  • Omega0::Vector
  • mean_phi_const::Vector = zeros(length(Omega0)): the prior mean of the constant term in our VAR.
source
TermStructureModels.LatentSpace (Type)
@kwdef struct LatentSpace <: PosteriorSample

When the model is expressed in the JSZ latent factor space, the statistical parameters in struct Parameter are transformed accordingly. This struct contains the transformed parameters. Specifically, the transformation is latents[t,:] = T0P_ + inv(T1X)*PCs[t,:].

In the latent factor space, the transition equation is data[t,:] = KPXF + GPXFXF*vec(data[t-1:-1:t-p,:]') + MvNormal(0, OmegaXFXF), where data = [latents macros].

  • latents::Matrix
  • kappaQ
  • kQ_infty
  • KPXF::Vector
  • GPXFXF::Matrix
  • OmegaXFXF::Matrix
source
TermStructureModels.Parameter (Type)
@kwdef struct Parameter <: PosteriorSample

It contains the statistical parameters of the model that are sampled by function posterior_sampler.

  • kappaQ
  • kQ_infty::Float64
  • phi::Matrix{Float64}
  • varFF::Vector{Float64}
  • SigmaO::Vector{Float64}
  • gamma::Vector{Float64}
source
TermStructureModels.ReducedForm (Type)
@kwdef struct ReducedForm <: PosteriorSample

It contains the statistical parameters expressed in terms of the reduced-form VAR(p) P-dynamics. lambdaP and LambdaPF are the parameters of the market-price-of-risk equation, and they contain only the first dQ (non-zero) equations.

  • kappaQ
  • kQ_infty
  • KPF
  • GPFF
  • OmegaFF::Matrix
  • SigmaO::Vector
  • lambdaP
  • LambdaPF
  • mpr::Matrix: market prices of risk, size (T, dP)
source
TermStructureModels.Scenario (Type)
@kwdef struct Scenario

It contains the scenarios to be conditioned on in the scenario analysis. When y = [yields; macros] is an observed vector in our measurement equation, Scenario.combinations*y = Scenario.values constitutes the scenario at a specific time. A Vector{Scenario} describes a time series of scenarios.

combinations and values should be a Matrix and a Vector, respectively. If the scenario value is a scalar, combinations should be a matrix with one row and values should be a one-element vector, for example [values].

  • combinations::Matrix
  • values::Vector
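
The lines below are a minimal sketch of constructing a two-period scenario. The dimensions (20 yields and 4 macro variables) and the constrained value are placeholders; the only assumption taken from this documentation is that Scenario is a @kwdef struct whose constraint reads combinations*[yields; macros] = values.

# Hypothetical setting: 20 yields and 4 macro variables, so y = [yields; macros] has 24 elements.
# The scenario fixes the first yield at 4.0 for the next two periods.
combination = zeros(1, 24)          # one constraint on the 24-dimensional observation vector
combination[1, 1] = 1.0             # select the first yield
S = [Scenario(combinations=combination, values=[4.0]) for _ in 1:2]   # Vector{Scenario}, one per period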
source
TermStructureModels.TermPremium (Type)
@kwdef struct TermPremium <: PosteriorSample

It contains an estimated time series of the term premium for one maturity.

  • TP::Vector: term premium estimates of a specific maturity bond. TP = timevarying_TP + const_TP + jensen holds.
  • timevarying_TP::Matrix: rows:time, cols:factors, values: contributions of factors on TP
  • const_TP::Float64: constant part in TP
  • jensen::Float64: the part due to the Jensen's inequality
source
TermStructureModels.YieldCurve (Type)
@kwdef struct YieldCurve <: PosteriorSample

It contains a fitted yield curve. yields[t,:] = intercept + slope*latents[t,:] holds.

  • latents::Matrix: latent pricing factors in LatentSpace
  • yields
  • intercept
  • slope
source
Base.getindex (Method)
getindex(x::PosteriorSample, c::Symbol)

For a struct that is a subtype of PosteriorSample, struct[:name] returns the field name of the struct.

source
Base.getindex (Method)
getindex(x::Vector{<:PosteriorSample}, c::Symbol)

For a struct that is a subtype of PosteriorSample, struct[:name] returns the field name of the struct. Output[i] is the i-th posterior sample.

source
Statistics.mean (Method)
mean(x::Vector{<:PosteriorSample})

Output[:variable name] returns the corresponding posterior mean.

source
Statistics.median (Method)
median(x::Vector{<:PosteriorSample})

Output[:variable name] returns the corresponding posterior median.

source
Statistics.quantile (Method)
quantile(x::Vector{<:PosteriorSample}, q)

Output[:variable name] returns a quantile of the corresponding posterior distribution.

source
Statistics.std (Method)
std(x::Vector{<:PosteriorSample})

Output[:variable name] returns the corresponding posterior standard deviation.

source
Statistics.var (Method)
var(x::Vector{<:PosteriorSample})

Output[:variable name] returns the corresponding posterior variance.
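
A minimal sketch of how these accessors compose, assuming saved_params is the Vector{Parameter} returned by posterior_sampler (a placeholder name):

saved_params[:kQ_infty]                     # Vector of posterior draws of kQ_infty
mean(saved_params)[:phi]                    # posterior mean of phi
quantile(saved_params, 0.975)[:kQ_infty]    # posterior 97.5% quantile of kQ_infty
std(saved_params)[:varFF]                   # posterior standard deviation of varFF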

source
TermStructureModels.AR_res_var (Method)
AR_res_var(TS::Vector, p)

It derives the MLE error-variance estimate of an AR(p) model.

Input

  • univariate time series TS and the lag p

Output(2)

residual variance estimate, AR(p) coefficients

source
TermStructureModels.GQ_XX (Method)
GQ_XX(; kappaQ)

kappaQ governs the conditional mean of the Q-dynamics of X, whose slope matrix has a restricted form. This function returns that restricted form.

Output

  • slope matrix of the Q-conditional mean of X
source
TermStructureModels.LDL (Method)
LDL(X)

This function computes the LDLt decomposition X = L*D*L', where L is a lower triangular matrix and D is diagonal. Details of the algorithm can be found on Wikipedia.

Input

  • X, the matrix to be decomposed

Output(2)

L, D

  • Decomposed result is X = L*D*L'
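
A minimal sketch on a small symmetric positive-definite matrix (the matrix is a placeholder):

using LinearAlgebra
X = [4.0 2.0; 2.0 3.0]
L, D = LDL(X)
L * D * L' ≈ X    # should hold up to numerical error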
source
TermStructureModels.PCA (Function)
PCA(yields, p, proxies=[]; rescaling=false, dQ=[])

It derives the principal components from yields.

Input

  • yields[p+1:end, :] is used to construct the affine transformation, and then all yields[:,:] are transformed into the principal components.
  • Since the signs of the PCs are not identified, we use proxies to identify them. We flip the PCs so that cor(proxies[:, i], PCs[:, i]) > 0. If proxies is not given, we use the following proxies as a default: [yields[:, end] yields[:, end] - yields[:, 1] 2yields[:, Int(floor(size(yields, 2) / 3))] - yields[:, 1] - yields[:, end]].
  • size(proxies) = (size(yields[p+1:end, :], 1), dQ)
  • If rescaling == true, all PCs and OCs are normalized to have the average standard deviation of yields.

Output(5)

PCs, OCs, Wₚ, Wₒ, mean_PCs

  • PCs, OCs: first dQ and the remaining principal components
  • Wₚ, Wₒ: the rotation matrix for PCs and OCs, respectively
  • mean_PCs: the mean of the PCs before demeaning.
  • PCs are demeaned.
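
A minimal sketch, assuming yields is a T×N matrix of observed yields and p is the VAR lag (both placeholders):

p = 12
PCs, OCs, Wₚ, Wₒ, mean_PCs = PCA(yields, p)
# PCs: the first dQ principal components (demeaned); OCs: the remaining components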
source
TermStructureModels.calibrate_mean_phi_const (Method)
calibrate_mean_phi_const(mean_kQ_infty, std_kQ_infty, nu0, yields, macros, tau_n, p; mean_phi_const_PCs=[], medium_tau=collect(24:3:48), iteration=1000, data_scale=1200, kappaQ_prior_pr=[], τ=[])

The purpose of the function is to calibrate a prior mean of the first dQ constant terms in our VAR. Adjust your prior setting based on the prior samples in outputs.

Input

  • mean_phi_const_PCs is your prior mean of the first dQ constants. Our default option sets it to a zero vector.
  • iteration is the number of prior samples.
  • τ::scalar is a maturity for calculating the constant part in the term premium.
    • If τ is empty, the function does not sample the prior distribution of the constant part of the term premium.

Output(2)

prior_λₚ, prior_TP

  • samples from the prior distribution of λₚ
  • prior samples of constant part in the τ-month term premium
source
TermStructureModels.conditional_forecasts (Method)
conditional_forecasts(S::Vector, τ, horizon, saved_params, yields, macros, tau_n; mean_macros::Vector=[], data_scale=1200)

Input

scenarios, a result of the posterior sampler, and data

  • S[t] = conditioned scenario at time size(yields, 1)+t.
    • If you need an unconditional prediction, set S = [].
    • If you are conditioning on a scenario, S should be a Vector{Scenario}.
  • τ is a vector. The term premium of the τ[i]-maturity bond is forecast for each i.
    • If τ is set to [], the term premium is not forecast.
  • horizon: maximum length of the predicted path. It should not be smaller than length(S).
  • saved_params: the first output of function posterior_sampler.
  • mean_macros::Vector: If you demeaned macro variables, you can input the mean of the macro variables. Then, the output will be generated in terms of the un-demeaned macro variables.

Output

  • Vector{Forecast}(, iteration)
  • The t-th rows of the predicted yields, factors, and term premiums are the corresponding predicted values at time size(yields, 1)+t.
  • Mathematically, the output contains posterior samples from the distribution of the future observations given the past observations and the scenario.
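
A minimal sketch, assuming saved_params comes from posterior_sampler and yields, macros, tau_n are the estimation data (all placeholders). The hypothetical scenario fixes the first observed yield at 4.0 for two periods, as in the Scenario example above.

combination = zeros(1, size(yields, 2) + size(macros, 2)); combination[1, 1] = 1.0
S = [Scenario(combinations=combination, values=[4.0]) for _ in 1:2]
fcst = conditional_forecasts(S, [120], 8, saved_params, yields, macros, tau_n)
mean(fcst)[:yields]    # posterior-mean predicted yield path, horizon × maturities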
source
TermStructureModels.dcurvature_dτ (Method)
dcurvature_dτ(τ; kappaQ)

This function calculates the first derivative of the curvature factor loading with respect to the maturity.

Input

  • kappaQ: The decay parameter
  • τ: The maturity at which the derivative is calculated

Output

  • the first derivative of the curvature factor loading w.r.t. the maturity
source
TermStructureModels.erase_nonstationary_param (Method)
erase_nonstationary_param(saved_params)

It filters out posterior samples that imply a unit-root VAR system. Only stationary posterior samples remain.

Input

  • saved_params is the first output of function posterior_sampler.

Output(2):

stationary samples, acceptance rate(%)

  • The second output reports the share of posterior samples that remain.
source
TermStructureModels.fitted_YieldCurve (Method)
fitted_YieldCurve(τ0, saved_latent_params::Vector{LatentSpace}; data_scale=1200)

It generates a fitted yield curve.

Input

  • τ0 is a set of maturities of interest. τ0 does not need to be the same as the one used for the estimation.
  • saved_latent_params is a transformed posterior sample using function latentspace.

Output

  • Vector{YieldCurve}(,# of iteration)
  • yields and latents contain initial observations.
source
TermStructureModels.generative (Method)
generative(T, dP, tau_n, p, noise::Float64; kappaQ, kQ_infty, KPXF, GPXFXF, OmegaXFXF, data_scale=1200)

This function generates simulated data given the parameters. Note that all parameters live in the latent factor state space (that is, they are the parameters in struct LatentSpace). The notation differs slightly from the paper because calligraphic letters are hard to type in code: mathcal{F} in the paper is written as F here, and "F" in the paper is written as XF.

Input:

  • noise = variance of the measurement errors

Output(3)

yields, latents, macros

  • yields = Matrix{Float64}(obs,T,length(tau_n))
  • latents = Matrix{Float64}(obs,T,dimQ())
  • macros = Matrix{Float64}(obs,T,dP - dimQ())
source
TermStructureModels.hessian (Function)
hessian(f, x, index=[])

It calculates the Hessian matrix of a scalar function f at x. If index is not empty, it calculates the Hessian matrix of the function with respect to the selected variables.
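
A minimal sketch on a simple quadratic function (the function and evaluation point are placeholders):

f(x) = x[1]^2 + 3x[1]*x[2] + 2x[2]^2
H = hessian(f, [1.0, 2.0])    # ≈ [2.0 3.0; 3.0 4.0] up to numerical error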

source
TermStructureModels.ineff_factor (Method)
ineff_factor(saved_params)

It returns the inefficiency factor of each parameter.

Input

  • Vector{Parameter} from posterior_sampler

Output

  • The estimated inefficiency factors are stored in a tuple with fields (kappaQ, kQ_infty, gamma, SigmaO, varFF, phi). For example, to access the inefficiency factor of phi, use Output.phi.
  • If fix_const_PC1==true in your optimized struct Hyperparameter, Output.phi[1,1] may be unreliable, so ignore it.
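
A minimal sketch, assuming saved_params is the Vector{Parameter} from posterior_sampler (a placeholder):

ineff = ineff_factor(saved_params)
maximum(ineff.phi)    # largest inefficiency factor among the phi coefficients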
source
TermStructureModels.isstationary (Method)
isstationary(GPFF)

It checks whether the reduced-form VAR slope matrix implies a stationary system. If there is at least one unit root, it returns false.

Input

  • GPFF should not include intercepts. GPFF is a dP by dP*p matrix in which the lag-1 slope matrix comes first and the lag-p slope matrix comes last.

Output

  • boolean
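
A minimal sketch for a bivariate VAR(1) (the slope matrix is a placeholder):

GPFF = [0.5 0.1;
        0.0 0.8]        # dP = 2, p = 1; eigenvalues 0.5 and 0.8 lie inside the unit circle
isstationary(GPFF)      # true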
source
TermStructureModels.latentspace (Method)
latentspace(saved_params, yields, tau_n; data_scale=1200)

This function translates the principal components state space into the latent factor state space.

Input

  • data_scale::scalar: In a typical affine term structure model, theoretical yields are in decimals and are not annualized. However, for convenience (public data usually contain annualized percentage yields) and numerical stability, we sometimes want to scale up yields and use (data_scale*theoretical yields) as the variable yields. The data_scale option controls this scaling. For example, set data_scale = 1200 to use annualized percentage monthly yields as yields.

Output

  • Vector{LatentSpace}(, iteration)
  • latent factors contain initial observations.
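
A minimal sketch that maps posterior draws into the latent space and then builds fitted yield curves; saved_params, yields, macros, and tau_n are placeholders as in posterior_sampler, and fitted_YieldCurve is documented above.

latent_params = latentspace(saved_params, yields, tau_n)
curves = fitted_YieldCurve(collect(3:3:120), latent_params)
mean(curves)[:yields]    # posterior-mean fitted yields at the requested maturities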
source
TermStructureModels.log_marginal (Method)
log_marginal(PCs, macros, rho, tuned::Hyperparameter, tau_n, Wₚ; ψ=[], ψ0=[], medium_tau, kappaQ_prior_pr, fix_const_PC1)

This function calculates the value of our marginal likelihood. Only the transition equation is used in the calculation.

Input

  • tuned is a point where the marginal likelihood is evaluated.
  • ψ0 and ψ are multiplied by the prior variances of the intercept and lagged-regressor coefficients in the orthogonalized transition equation. They are used to impose zero prior variances. An empty default value means that this feature is not used. [ψ0 ψ][i,j] corresponds to phi[i,j].

Output

  • the log marginal likelihood of the VAR system.
source
TermStructureModels.loglik_mea (Method)
loglik_mea(yields, tau_n; kappaQ, kQ_infty, phi, varFF, SigmaO, data_scale)

This function computes the log likelihood of the measurement equation.

Output

  • the measurement equation part of the log likelihood
source
TermStructureModels.loglik_tran (Method)
loglik_tran(PCs, macros; phi, varFF)

It calculates the log likelihood of the transition equation.

Output

  • log likelihood of the transition equation.
source
TermStructureModels.phi_2_phi₀_C (Method)
phi_2_phi₀_C(; phi)

It divides phi into the lagged-regressor part and the contemporaneous-regressor part.

Output(3)

phi0, C = C0 + I, C0

  • phi0: coefficients for the lagged regressors
  • C: coefficients of the dependent variables when all contemporaneous variables are moved to the LHS of the orthogonalized equations; therefore, the diagonal of C is ones. Note that because the contemporaneous variables get negative signs when they are on the RHS, the signs in C are the same whether the variables are on the RHS or the LHS.
source
TermStructureModels.posterior_sampler (Method)
posterior_sampler(yields, macros, tau_n, rho, iteration, tuned::Hyperparameter; medium_tau=collect(24:3:48), init_param=[], ψ=[], ψ0=[], gamma_bar=[], kappaQ_prior_pr=[], mean_kQ_infty=0, std_kQ_infty=0.1, fix_const_PC1=false, data_scale=1200)

This is a posterior distribution sampler.

Input

  • iteration: # of posterior samples
  • tuned: optimized hyperparameters used during estimation
  • init_param: starting point of the sampler. It should be of type Parameter.

Output(2)

Vector{Parameter}(posterior, iteration), acceptance rate of the MH algorithm
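
A minimal sketch of the estimation step, assuming yields, macros, tau_n, and rho are user-supplied data and tuned is the Hyperparameter returned by tuning_hyperparameter (all placeholders):

saved_params, accept_rate = posterior_sampler(yields, macros, tau_n, rho, 5_000, tuned)
saved_params, kept_pct = erase_nonstationary_param(saved_params)   # keep stationary draws only
mean(saved_params)[:kQ_infty]                                      # posterior mean of kQ_infty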

source
TermStructureModels.prior_kappaQ (Method)
prior_kappaQ(medium_tau, pr)

For each candidate medium-term maturity, the function derives the decay parameter kappaQ that maximizes the curvature factor loading at that maturity. It then imposes a discrete prior distribution on these maximizers with prior probability vector pr.

Input

  • medium_tau::Vector(candidate medium maturities, # of candidates)
  • pr::Vector(probability, # of candidates)

Output

  • a discrete prior distribution whose support is the set of maximizing kappaQ values
source
TermStructureModels.reducedform (Method)
reducedform(saved_params, yields, macros, tau_n; data_scale=1200)

It converts the posterior samples into the reduced-form VAR parameters.

Input

  • saved_params is the first output of function posterior_sampler.

Output

  • Posterior samples in terms of struct ReducedForm
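
A minimal sketch, with saved_params, yields, macros, and tau_n as placeholders:

reduced = reducedform(saved_params, yields, macros, tau_n)
mean(reduced)[:lambdaP]    # posterior mean of the market-price-of-risk intercepts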
source
TermStructureModels.scenario_analysis (Method)
scenario_analysis(S::Vector, τ, horizon, saved_params, yields, macros, tau_n; mean_macros::Vector=[], data_scale=1200)

Input

scenarios, a result of the posterior sampler, and data

  • S[t] = conditioned scenario at time size(yields, 1)+t.
    • Set S = [] if you need an unconditional prediction.
    • If you are conditioning on a scenario, S should be a Vector{Scenario}.
  • τ is a vector of the maturities whose term premiums are of interest.
  • horizon: maximum length of the predicted path. It should not be smaller than length(S).
  • saved_params: the first output of function posterior_sampler.
  • mean_macros::Vector: If you demeaned macro variables, you can input the mean of the macro variables. Then, the output will be generated in terms of the un-demeaned macro variables.

Output

  • Vector{Forecast}(, iteration)
  • The t-th rows of the predicted yields, factors, and term premiums are the corresponding predicted values at time size(yields, 1)+t.
  • Mathematically, the output is the posterior distribution of E[future obs|past obs, scenario, parameters].
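
The call pattern mirrors conditional_forecasts; a minimal sketch, with S, saved_params, yields, macros, and tau_n as placeholders:

proj = scenario_analysis(S, [120], 8, saved_params, yields, macros, tau_n)
quantile(proj, 0.5)[:TP]    # posterior median of the conditional expected term-premium path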
source
TermStructureModels.term_premium (Method)
term_premium(τ, tau_n, saved_params, yields, macros; data_scale=1200)

This function generates posterior samples of the term premiums.

Input

  • τ: the maturity of interest for calculating the term premium
  • saved_params from function posterior_sampler

Output

  • Vector{TermPremium}(, iteration)
  • Outputs exclude initial observations.
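
A minimal sketch for the 120-month term premium, with saved_params, yields, macros, and tau_n as placeholders:

tp = term_premium(120, tau_n, saved_params, yields, macros)
mean(tp)[:TP]    # posterior-mean term-premium path for the 120-month bond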
source
TermStructureModels.tuning_hyperparameter (Method)
tuning_hyperparameter(yields, macros, tau_n, rho; populationsize=50, maxiter=10_000, medium_tau=collect(24:3:48), upper_q=[1 1; 1 1; 10 10; 100 100], mean_kQ_infty=0, std_kQ_infty=0.1, upper_nu0=[], mean_phi_const=[], fix_const_PC1=false, upper_p=18, mean_phi_const_PC1=[], data_scale=1200, kappaQ_prior_pr=[], init_nu0=[], is_pure_EH=false)

It optimizes our hyperparameters by maximizing the marginal likelihood of the transition equation. Our optimizer is a differential evolution algorithm that utilizes bimodal movements in the eigen-space (Wang, Li, Huang, and Li, 2014) and trivial geography (Spector and Klein, 2006).

Input

  • When we compare marginal likelihoods between models, the data for the dependent variable should be the same across the models. To achieve that, we set the sample period of the dependent variable based on upper_p. For example, if upper_p = 3, yields[4:end,:] and macros[4:end,:] are the data for our dependent variable, and yields[1:3,:] and macros[1:3,:] are used to set the initial observations for all lags.
  • populationsize and maxiter are options for the optimizer.
    • populationsize: the number of candidate solutions in each generation
    • maxiter: the maximum number of iterations
  • The lower bounds for q and nu0 are 0 and dP+2.
  • The upper bounds for q, nu0 and VAR lag can be set by upper_q, upper_nu0, upper_p.
    • Our default option for upper_nu0 is the time-series length of the data.
  • If you use our default option for mean_phi_const,
    1. mean_phi_const[dQ+1:end] is a zero vector.
    2. mean_phi_const[1:dQ] is calibrated to make a prior mean of λₚ a zero vector.
    3. After step 2, mean_phi_const[1] is replaced with mean_phi_const_PC1 if it is not empty.
  • mean_phi_const = Matrix(your prior, dP, upper_p)
  • mean_phi_const[:,i] is a prior mean for the VAR(i) constant. Therefore mean_phi_const is a matrix only in this function. In other functions, mean_phi_const is a vector for the orthogonalized VAR system with your selected lag.
  • When fix_const_PC1==true, the first element in a constant term in our orthogonalized VAR is fixed to its prior mean during the posterior sampling.
  • data_scale::scalar: In a typical affine term structure model, theoretical yields are in decimals and are not annualized. However, for convenience (public data usually contain annualized percentage yields) and numerical stability, we sometimes want to scale up yields and use (data_scale*theoretical yields) as the variable yields. The data_scale option controls this scaling. For example, set data_scale = 1200 to use annualized percentage monthly yields as yields.
  • is_pure_EH::Bool: This option matters only when mean_phi_const = []. In that case, is_pure_EH = false sets mean_phi_const to zero vectors, while is_pure_EH = true sets mean_phi_const so that it implies the pure expectations hypothesis.

Output(2)

optimized Hyperparameter, optimization result

  • Note that we minimize the negative log marginal likelihood, so the second output describes that minimization problem.
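
A minimal sketch of the tuning step, with yields, macros, tau_n, and rho as placeholders; the reduced maxiter is only for illustration.

tuned, opt_result = tuning_hyperparameter(yields, macros, tau_n, rho; maxiter=2_000)
tuned.p    # optimized VAR lag
tuned.q    # optimized Minnesota shrinkage parameters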
source

Internal Functions

TermStructureModels.Aₚ (Method)
Aₚ(Aₓ_, Bₓ_, T0P_, Wₒ)

Input

  • Aₓ_, Bₓ_, and T0P_ are outputs of functions Aₓ, Bₓ, and T0P, respectively.

Output

  • Aₚ
source
TermStructureModels.Minnesota (Method)
Minnesota(l, i, j; q, nu0, Omega0, dQ=[])

It returns the unscaled prior variance of the Minnesota prior.

Input

  • lag l, dependent variable i, regressor j in the VAR(p)
  • q[:,1] and q[:,2] are the [own, cross, lag, intercept] shrinkages for the first dQ and the remaining dP-dQ equations, respectively.
  • nu0(d.f.), Omega0(scale): Inverse-Wishart prior for the error-covariance matrix of VAR(p).

Output

  • Minnesota part in the prior variance
source
TermStructureModels.NIG_NIG (Method)
NIG_NIG(y, X, β₀, B₀, α₀, δ₀)

Normal-InverseGamma-Normal-InverseGamma update

  • prior: β|σ² ~ MvNormal(β₀,σ²B₀), σ² ~ InverseGamma(α₀,δ₀)
  • likelihood: y|β,σ² = Xβ + MvNormal(zeros(T,1),σ²I(T))

Output(2)

β, σ²

  • posterior sample
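
A minimal sketch of the conjugate update on simulated data. NIG_NIG is internal, so it is called with the module prefix; the simulated regression and the prior values below are placeholders.

using LinearAlgebra
T, k = 100, 2
X = [ones(T) randn(T)]                     # hypothetical regressors
y = X * [1.0, 0.5] + randn(T)              # hypothetical dependent variable
β, σ² = TermStructureModels.NIG_NIG(y, X, zeros(k), Matrix(10.0I, k, k), 3.0, 2.0)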
source
TermStructureModels.PCs_2_latents (Method)
PCs_2_latents(yields, tau_n; kappaQ, kQ_infty, KPF, GPFF, OmegaFF, data_scale)

Notation XF is for the latent factor space and notation F is for the PC state space.

Input

  • data_scale::scalar: In a typical affine term structure model, theoretical yields are in decimals and are not annualized. However, for convenience (public data usually contain annualized percentage yields) and numerical stability, we sometimes want to scale up yields and use (data_scale*theoretical yields) as the variable yields. The data_scale option controls this scaling. For example, set data_scale = 1200 to use annualized percentage monthly yields as yields.

Output(6)

latent, kappaQ, kQ_infty, KPXF, GPXFXF, OmegaXFXF

  • latent factors contain initial observations.
source
TermStructureModels.T0P (Method)
T0P(T1X_, Aₓ_, Wₚ, c)

Input

  • T1X_ and Aₓ_ are outputs of functions T1X and Aₓ, respectively. c is the sample mean of the PCs.

Output

  • T0P
source
TermStructureModels._termPremium (Method)
_termPremium(τ, PCs, macros, bτ_, T0P_, T1X_; kappaQ, kQ_infty, KPF, GPFF, ΩPP, data_scale)

This function calculates a term premium for maturity τ.

Input

  • data_scale::scalar: In a typical affine term structure model, theoretical yields are in decimals and are not annualized. However, for convenience (public data usually contain annualized percentage yields) and numerical stability, we sometimes want to scale up yields and use (data_scale*theoretical yields) as the variable yields. The data_scale option controls this scaling. For example, set data_scale = 1200 to use annualized percentage monthly yields as yields.

Output(4)

TP, timevarying_TP, const_TP, jensen

  • TP: term premium of maturity τ
  • timevarying_TP: contributions of each factor in [PCs macros] to TP at each time t (row: time, col: variable)
  • const_TP: constant part of TP
  • jensen: the part due to Jensen's inequality
  • Output excludes the time period for the initial observations.
source
TermStructureModels.aτ (Method)
aτ(N, bτ_, tau_n, Wₚ; kQ_infty, ΩPP, data_scale)
aτ(N, bτ_; kQ_infty, ΩXX, data_scale)

The function has two methods (multiple dispatch).

Input

  • When Wₚ is among the arguments, the function uses ΩPP.
  • Otherwise, it uses ΩXX = OmegaXFXF[1:dQ, 1:dQ], so the parameters are in the latent factor space and Wₚ is not needed.
  • bτ_ is an output of function bτ.
  • data_scale::scalar: In a typical affine term structure model, theoretical yields are in decimals and are not annualized. However, for convenience (public data usually contain annualized percentage yields) and numerical stability, we sometimes want to scale up yields and use (data_scale*theoretical yields) as the variable yields. The data_scale option controls this scaling. For example, set data_scale = 1200 to use annualized percentage monthly yields as yields.

Output

  • Vector(Float64)(aτ,N)
  • For the i-th maturity, Output[i] is the corresponding aτ value.
source
TermStructureModels.bτ (Method)
bτ(N; kappaQ, dQ)

It solves the difference equation for bτ.

Output

  • for maturity i, bτ[:, i] is a vector of factor loadings.
source
TermStructureModels.jensens_inequality (Method)
jensens_inequality(τ, bτ_, T1X_; ΩPP, data_scale)

This function evaluates the Jensen's inequality term. Every term is invariant with respect to data_scale except for this one, so it needs to be scaled down by data_scale.

Output

  • the Jensen's inequality term for maturity τ
source
TermStructureModels.loglik_mea2 (Method)
loglik_mea2(yields, tau_n; kappaQ, kQ_infty, phi, varFF, SigmaO, data_scale)

This function is the same as loglik_mea but it requires ΩPP as an input.

source
TermStructureModels.longvar (Method)
longvar(v)

It calculates the long-run variance of v using the quadratic spectral window with the bandwidth selection of Andrews (1991). The AR(1) approximation is used.

Input

  • Time-series Vector v

Output

  • Estimated 2πh(0) of v, where h(x) is the spectral density of v at x.
source
TermStructureModels.post_SigmaO (Method)
post_SigmaO(yields, tau_n; kappaQ, kQ_infty, ΩPP, gamma, p, data_scale)

Posterior sampler for the measurement errors

Output

  • Vector{Dist}(IG, N-dQ)
source
TermStructureModels.post_kappaQ (Method)
post_kappaQ(yields, prior_kappaQ_, tau_n; kQ_infty, phi, varFF, SigmaO, data_scale)

Input

  • prior_kappaQ_ is an output of function prior_kappaQ.

Output

  • Full conditional posterior distribution
source
TermStructureModels.post_kappaQ2 (Method)
post_kappaQ2(yields, prior_kappaQ_, tau_n; kappaQ, kQ_infty, phi, varFF, SigmaO, data_scale, x_mode, inv_x_hess)

It conducts the Metropolis-Hastings algorithm for the reparameterized kappaQ under the unrestricted JSZ form. x_mode and inv_x_hess constitute the mean and variance of the Normal proposal distribution.

  • Reparameterization: kappaQ[1] = x[1], kappaQ[2] = x[1] + x[2], kappaQ[3] = x[1] + x[2] + x[3]
  • Jacobian: [1 0 0; 1 1 0; 1 1 1], whose determinant is 1
source
TermStructureModels.post_phi_varFF (Method)
post_phi_varFF(yields, macros, mean_phi_const, rho, prior_kappaQ_, tau_n; phi, ψ, ψ0, varFF, q, nu0, Omega0, kappaQ, kQ_infty, SigmaO, fix_const_PC1, data_scale)

Full-conditional posterior sampler for phi and varFF

Input

  • prior_kappaQ_ is an output of function prior_kappaQ.
  • When fix_const_PC1==true, the first element in a constant term in our orthogonalized VAR is fixed to its prior mean during the posterior sampling.

Output(3)

phi, varFF, isaccept=Vector{Bool}(undef, dQ)

  • It gives a posterior sample.
source
TermStructureModels.prior_C (Method)
prior_C(; Omega0::Vector)

We translate the Inverse-Wishart prior into a series of Normal-Inverse-Gamma (NIG) prior distributions. If the dimension is dₚ, there are dₚ NIG prior distributions. This function generates the Normal priors.

Output:

  • unscaled prior of C in the LDLt decomposition, OmegaFF = inv(C)*diagm(varFF)*inv(C)'

Important note

prior variance for C[i,:] = varFF[i]*variance of output[i,:]

source
TermStructureModels.prior_gamma (Method)
prior_gamma(yields, p)

There is a hierarchical structure in the measurement equation. The prior means of the measurement errors are gamma[i], and each gamma[i] follows a Gamma(1, gamma_bar) distribution. This function sets gamma_bar empirically: OLS is used to estimate the measurement equation, the residual variance is computed for each maturity, and gamma_bar is set to the inverse of the average residual variance.

Output

  • hyperparameter gamma_bar
source
TermStructureModels.prior_phi0 (Method)
prior_phi0(mean_phi_const, rho::Vector, prior_kappaQ_, tau_n, Wₚ; ψ0, ψ, q, nu0, Omega0, fix_const_PC1)

This function derives the prior distribution of the coefficients of the lagged regressors in the orthogonalized VAR.

Input

  • prior_kappaQ_ is an output of function prior_kappaQ.
  • When fix_const_PC1==true, the first element in a constant term in our orthogonalized VAR is fixed to its prior mean during the posterior sampling.

Output

  • Normal prior distributions on the intercepts and the slope coefficients of the lagged variables in the orthogonalized equations.
  • Output[:,1] for intercepts, Output[:,1+1:1+dP] for the first lag, Output[:,1+dP+1:1+2*dP] for the second lag, and so on.

Important note

prior variance for phi[i,:] = varFF[i]*var(output[i,:])

source
TermStructureModels.prior_varFF (Method)
prior_varFF(; nu0, Omega0::Vector)

We translate the Inverse-Wishart prior into a series of Normal-Inverse-Gamma (NIG) prior distributions. If the dimension is dₚ, there are dₚ NIG prior distributions. This function generates the Inverse-Gamma priors.

Output:

  • prior of varFF in the LDLt decomposition, OmegaFF = inv(C)*diagm(varFF)*inv(C)'
  • Each element in the output follows Inverse-Gamma priors.
source
TermStructureModels.yphi_Xphi (Method)
yphi_Xphi(PCs, macros, p)

This function generates the dependent variable and the corresponding regressors in the orthogonalized transition equation.

Output(4)

yphi, Xphi = [ones(T - p) Xphi_lag Xphi_contemporaneous], [ones(T - p) Xphi_lag], Xphi_contemporaneous

  • yphi and Xphi are full matrices. For the i-th equation, the dependent variable is yphi[:,i] and the regressor matrix is Xphi.
  • Xphi is the same for all orthogonalized transition equations. The orthogonalized equations differ in their contemporaneous regressors, so the corresponding regressors in Xphi should be excluded. The structure of parameter phi performs that task by setting the coefficients of the excluded regressors to zero. In particular, in the last dP by dP block of phi, the diagonal and upper-diagonal elements should be zero.
source