API documentation

Index

Exported Functions

TermStructureModels.ForecastType
@kwdef struct Forecast <: PosteriorSample

This struct contains the results of the scenario analysis: the conditional predictions for yields, factors = [PCs macros], and term premiums.

  • yields
  • factors
  • TP: term premium forecasts
  • EH: estimated expectation hypothesis component
source
TermStructureModels.HyperparameterType
@kwdef struct Hyperparameter
  • p::Int
  • q::Matrix
  • nu0
  • Omega0::Vector
  • mean_phi_const::Vector = zeros(length(Omega0)): This is the prior mean of the constant term in the VAR.
source
TermStructureModels.LatentSpaceType
@kwdef struct LatentSpace <: PosteriorSample

When the model is mapped into the JSZ latent factor space, the statistical parameters in struct Parameter are transformed accordingly. This struct contains the transformed parameters. Specifically, the transformation is latents[t,:] = T0P_ + inv(T1X)*PCs[t,:].

In the latent factor space, the transition equation is data[t,:] = KPXF + GPXFXF*vec(data[t-1:-1:t-p,:]') + e[t] with e[t] ~ MvNormal(0, OmegaXFXF), where data = [latent macros].

  • latents::Matrix
  • kappaQ
  • kQ_infty
  • KPXF::Vector
  • GPXFXF::Matrix
  • OmegaXFXF::Matrix
source
TermStructureModels.ParameterType
@kwdef struct Parameter <: PosteriorSample

This struct contains the statistical parameters of the model that are sampled from function posterior_sampler.

  • kappaQ
  • kQ_infty::Float64
  • phi::Matrix{Float64}
  • varFF::Vector{Float64}
  • SigmaO::Vector{Float64}
  • gamma::Vector{Float64}
source
TermStructureModels.Parameter_NUTSType
@kwdef struct Parameter_NUTS <: PosteriorSample

This struct contains the statistical parameters of the model that are sampled from function posterior_NUTS.

  • q
  • nu0
  • kappaQ
  • kQ_infty::Float64
  • phi::Matrix{Float64}
  • varFF::Vector{Float64}
  • SigmaO::Vector{Float64}
  • gamma::Vector{Float64}
source
TermStructureModels.ReducedFormType
@kwdef struct ReducedForm <: PosteriorSample

This struct contains the statistical parameters in terms of the reduced-form VAR(p) in the P-dynamics. lambdaP and LambdaPF are parameters of the market-price-of-risk equation, and they contain only the first dQ (non-zero) equations.

  • kappaQ
  • kQ_infty
  • KPF
  • GPFF
  • OmegaFF::Matrix
  • SigmaO::Vector
  • lambdaP
  • LambdaPF
  • mpr::Matrix(market prices of risks, T, dP)
source
TermStructureModels.ScenarioType
@kwdef struct Scenario

This struct contains scenarios to be conditioned in the scenario analysis. When y = [yields; macros] is an observed vector in the measurement equation, Scenario.combinations*y = Scenario.values constitutes the scenario at a specific time. Vector{Scenario} is used to describe a time-series of scenarios.

combinations and values should be a Matrix and a Vector, respectively. Even when the scenario contains a single restriction, combinations must be a matrix with one row and values must be a one-element vector, for example [value].

  • combinations::Matrix
  • values::Vector
source
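To make the restriction concrete, here is a self-contained sketch with hypothetical numbers (plain arrays are used in place of the Scenario struct): it pins down the first yield and the sum of two macro variables.

```julia
# Hypothetical observation y = [yields; macros] with 3 yields and 2 macros
y = [3.0, 3.5, 4.0, 1.2, 0.8]

# Two restrictions: the first yield equals 3.0, and the two macros sum to 2.0
combinations = [1.0 0.0 0.0 0.0 0.0;
                0.0 0.0 0.0 1.0 1.0]
values = [3.0, 2.0]

# y is consistent with the scenario exactly when combinations*y equals values
combinations * y ≈ values    # true for this y
```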
TermStructureModels.TermPremiumType
@kwdef struct TermPremium <: PosteriorSample

The yields are decomposed into the term premium (TP) and the expectation hypothesis component (EH). Each component has constant terms (const_TP and const_EH) and time-varying components (timevarying_TP and timevarying_EH). factorloading_EH and factorloading_TP are coefficients of the pricing factors for the time-varying components. Each column of the outputs indicates the results for each maturity.

The time-varying components are not stored in TermPremium, and they are the separate outputs in function term_premium.

  • TP
  • EH
  • factorloading_TP
  • factorloading_EH
  • const_TP
  • const_EH
source
TermStructureModels.YieldCurveType
@kwdef struct YieldCurve <: PosteriorSample

This struct contains the fitted yield curve. yields[t,:] = intercept + slope*latents[t,:] holds.

  • latents::Matrix: latent pricing factors in LatentSpace
  • yields
  • intercept
  • slope
source
Base.getindexMethod
getindex(x::PosteriorSample, c::Symbol)

For a struct <: PosteriorSample, struct[:name] returns the field name of the struct.

source
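The pattern can be illustrated with a minimal stand-alone sketch; the Toy struct below is hypothetical, and the method body mirrors the natural getfield forwarding (the package's actual implementation may differ):

```julia
abstract type PosteriorSample end

struct Toy <: PosteriorSample    # hypothetical example struct
    kappaQ::Float64
    kQ_infty::Float64
end

# Forward Symbol indexing to field access
Base.getindex(x::PosteriorSample, c::Symbol) = getfield(x, c)

s = Toy(0.95, 0.1)
s[:kappaQ]    # same as s.kappaQ
```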
Base.getindexMethod
getindex(x::Vector{<:PosteriorSample}, c::Symbol)

For a Vector of structs <: PosteriorSample, x[:name] collects the field name from every element. Output[i] = the i-th posterior sample

source
Statistics.meanMethod
mean(x::Vector{<:PosteriorSample})

Output[:variable name] returns the corresponding posterior mean.

source
Statistics.medianMethod
median(x::Vector{<:PosteriorSample})

Output[:variable name] returns the corresponding posterior median.

source
Statistics.quantileMethod
quantile(x::Vector{<:PosteriorSample}, q)

Output[:variable name] returns a quantile of the corresponding posterior distribution.

source
Statistics.stdMethod
std(x::Vector{<:PosteriorSample})

Output[:variable name] returns the corresponding posterior standard deviation.

source
Statistics.varMethod
var(x::Vector{<:PosteriorSample})

Output[:variable name] returns the corresponding posterior variance.

source
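The five methods above share the Output[:variable name] access pattern. A self-contained sketch of the underlying computation, using a hypothetical Draw struct in place of the package's types:

```julia
using Statistics

struct Draw                  # hypothetical posterior-sample struct
    kQ_infty::Float64
end
draws = [Draw(0.1), Draw(0.2), Draw(0.3)]

# Aggregate a field across posterior samples, as mean(x)[:kQ_infty] does
post_mean = mean(getfield(d, :kQ_infty) for d in draws)      # ≈ 0.2
post_med  = median(getfield(d, :kQ_infty) for d in draws)    # ≈ 0.2
```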
TermStructureModels.AR_res_varMethod
AR_res_var(TS::Vector, p)

This function derives the MLE error variance estimate of an AR(p) model.

Input

  • Univariate time series TS and lag p

Output(2)

Residual variance estimate, AR(p) coefficients

source
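A minimal OLS sketch of the idea (illustrative only; the package's MLE implementation may differ in details such as the variance denominator):

```julia
using LinearAlgebra

function ar_ols(TS::Vector{Float64}, p::Int)
    T = length(TS)
    Y = TS[p+1:end]                                          # dependent variable
    X = hcat(ones(T - p), (TS[p+1-l:T-l] for l in 1:p)...)   # intercept + p lags
    beta = X \ Y                                             # OLS coefficients
    resid = Y - X * beta
    sigma2 = sum(abs2, resid) / (T - p)                      # MLE-style denominator
    return sigma2, beta
end

sigma2, beta = ar_ols(randn(200), 2)
```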
TermStructureModels.GQ_XXMethod
GQ_XX(; kappaQ)

kappaQ governs the conditional mean of the Q-dynamics of X, and its slope matrix has a restricted form. This function returns that restricted form.

Output

  • slope matrix of the Q-conditional mean of X
source
TermStructureModels.LDLMethod
LDL(X)

This function computes the LDLt matrix decomposition, X = L*D*L', where L is a lower triangular matrix and D is a diagonal matrix. Details can be found on Wikipedia.

Input

  • The matrix to be decomposed, X

Output(2)

L, D

  • Decomposed result is X = L*D*L'
source
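For a symmetric positive-definite X, the decomposition can be sketched via the Cholesky factor (an illustrative stand-in, not the package's implementation):

```julia
using LinearAlgebra

# LDLt sketch: X = L*D*L' with unit-lower-triangular L and diagonal D
function ldl_sketch(X)
    C = cholesky(Symmetric(X)).L     # X = C*C'
    d = diag(C)
    L = C * Diagonal(1 ./ d)         # rescale columns so diag(L) == 1
    D = Diagonal(d .^ 2)
    return L, D
end

X = [4.0 2.0; 2.0 3.0]
L, D = ldl_sketch(X)
L * D * L'    # reconstructs X
```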
TermStructureModels.PCAMethod
PCA(yields, p; pca_loadings=[], dQ=[])

This function derives the principal components from yields.

Input

  • yields[p+1:end, :] is used to construct the affine transformation, and then all yields[:,:] are transformed into the principal components.
  • pca_loadings=Matrix{, dQ, size(yields, 2)} stores the loadings for the first dQ principal components (so principal_components = yields * pca_loadings'), and you may optionally provide these loadings externally; if omitted, the package computes them internally via PCA. 

Output(5)

PCs, OCs, Wₚ, Wₒ, mean_PCs

  • PCs, OCs: first dQ and the remaining principal components
  • Wₚ, Wₒ: the rotation matrix for PCs and OCs, respectively
  • mean_PCs: the mean of PCs before being demeaned.
  • PCs are demeaned.
source
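The core computation can be sketched in a few lines (a stylized stand-in: the package's sign and ordering conventions, and its use of yields[p+1:end, :] for constructing the loadings, are not reproduced here):

```julia
using LinearAlgebra, Statistics

yields = randn(100, 5) * randn(5, 5)   # hypothetical yield panel (T × maturities)
mu = mean(yields, dims=1)

# Loadings = eigenvectors of the sample covariance, largest variance first
E = eigen(Symmetric(cov(yields)))
W = E.vectors[:, sortperm(E.values, rev=true)]'

PCs = (yields .- mu) * W'    # demeaned principal components
mean_PCs = vec(mu * W')      # mean of the PCs before demeaning
```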
TermStructureModels.calibrate_mean_phi_constMethod
calibrate_mean_phi_const(mean_kQ_infty, std_kQ_infty, nu0, yields, macros, tau_n, p; mean_phi_const_PCs=[], medium_tau=collect(24:3:48), iteration=1000, data_scale=1200, kappaQ_prior_pr=[], τ=[], pca_loadings=[])

This function calibrates a prior mean of the first dQ constant terms in the VAR. Adjust your prior setting based on the prior samples in the outputs.

Input

  • mean_phi_const_PCs is your prior mean of the first dQ constants. The default option sets it as a zero vector.
  • iteration is the number of prior samples.
  • τ::scalar is a maturity for calculating the constant part in the term premium.
    • If τ is empty, the function does not sample the prior distribution of the constant part in the term premium.
  • pca_loadings=Matrix{, dQ, size(yields, 2)} stores the loadings for the first dQ principal components (so principal_components = yields * pca_loadings'), and you may optionally provide these loadings externally; if omitted, the package computes them internally via PCA.

Output(2)

prior_λₚ, prior_TP

  • samples from the prior distribution of λₚ
  • prior samples of the constant part in the τ-month term premium
source
TermStructureModels.conditional_expectationMethod
conditional_expectation(S::Vector, tau, horizon, saved_params, yields, macros, tau_n; baseline=[], mean_macros::Vector=[], data_scale=1200, pca_loadings=[], is_parallel=false)

Input

scenarios, a result of the posterior sampler, and data

  • S[t] = conditioned scenario at time size(yields, 1)+t.
    • Set S = [] if you need an unconditional prediction.
    • If you condition on a scenario, S should be a Vector{Scenario}.
  • tau is a vector of maturities for which term premiums are computed.
  • horizon: maximum length of the predicted path. It should not be smaller than length(S).
  • saved_params: the first output of function posterior_sampler.
  • baseline::Vector{Forecast}: baseline is the output of conditional_expectation. It is generally set as the result when S is empty. When provided, the scenario in S should be specified as deviations from baseline (i.e., the scenario path is expressed relative to baseline), and the output forecasts will also be returned as deviations from baseline.
  • mean_macros::Vector: If you demeaned macro variables, you can input the mean of the macro variables. Then, the output will be generated in terms of the un-demeaned macro variables.
  • If mean_macros was used as an input when deriving baseline with this function, mean_macros should also be included as an input when using baseline as an input. Conversely, if mean_macros was not used as an input when deriving baseline, it should not be included as an input when using baseline.
  • pca_loadings=Matrix{, dQ, size(yields, 2)} stores the loadings for the first dQ principal components (so principal_components = yields * pca_loadings'), and you may optionally provide these loadings externally; if omitted, the package computes them internally via PCA.
  • is_parallel enables multi-threaded parallel computation when set to true.

Output

  • Vector{Forecast}(, iteration)
  • The t-th rows of the predicted yields, predicted factors, predicted TP, and predicted EH are the predicted values at time size(yields, 1)+t.
  • Mathematically, it is a posterior distribution of E[future obs|past obs, scenario, parameters], or E[future obs|past obs, scenario, parameters] - E[future obs|past obs, baseline, parameters] when baseline is provided.
source
TermStructureModels.conditional_forecastMethod
conditional_forecast(S::Vector, tau, horizon, saved_params, yields, macros, tau_n; baseline=[], mean_macros::Vector=[], data_scale=1200, pca_loadings=[], is_parallel=false)

Input

scenarios, a result of the posterior sampler, and data

  • S[t] = conditioned scenario at time size(yields, 1)+t.
    • Set S = [] if you need an unconditional prediction.
    • If you condition on a scenario, S should be a Vector{Scenario}.
  • tau is a vector. The term premium of tau[i]-bond is forecasted for each i.
    • If tau is set to [], the term premium is not forecasted.
  • horizon: maximum length of the predicted path. It should not be smaller than length(S).
  • saved_params: the first output of function posterior_sampler.
  • baseline::Vector{Forecast}: baseline is the output of conditional_forecast. It is generally set as the result when S is empty. When provided, the scenario in S should be specified as deviations from baseline (i.e., the scenario path is expressed relative to baseline), and the output forecasts will also be returned as deviations from baseline.
  • mean_macros::Vector: If you demeaned macro variables, you can input the mean of the macro variables. Then, the output will be generated in terms of the un-demeaned macro variables.
  • If mean_macros was used as an input when deriving baseline with this function, mean_macros should also be included as an input when using baseline as an input. Conversely, if mean_macros was not used as an input when deriving baseline, it should not be included as an input when using baseline.
  • pca_loadings=Matrix{, dQ, size(yields, 2)} stores the loadings for the first dQ principal components (so principal_components = yields * pca_loadings'), and you may optionally provide these loadings externally; if omitted, the package computes them internally via PCA. 
  • is_parallel enables multi-threaded parallel computation when set to true.

Output

  • Vector{Forecast}(, iteration)
  • The t-th rows of the predicted yields, predicted factors, predicted TP, and predicted EH are the predicted values at time size(yields, 1)+t.
  • Mathematically, it is a posterior sample from (future obs | past obs, scenario), or from (future obs | past obs, scenario) minus (future obs | past obs, baseline) when baseline is provided.
source
TermStructureModels.dcurvature_dτMethod
dcurvature_dτ(τ; kappaQ)

This function calculates the first derivative of the curvature factor loading w.r.t. the maturity.

Input

  • kappaQ: The decay parameter
  • τ: The maturity at which the derivative is calculated

Output

  • the first derivative of the curvature factor loading w.r.t. the maturity
source
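For intuition, the following sketch assumes the standard Nelson–Siegel curvature loading (an assumption; the package's exact loading may differ) and approximates its maturity derivative by central finite differences:

```julia
# Assumed Nelson–Siegel curvature loading with decay parameter κ
curvature(τ; κ) = (1 - exp(-κ * τ)) / (κ * τ) - exp(-κ * τ)

# Central finite-difference derivative w.r.t. maturity (illustrative only)
dcurv_fd(τ; κ, h=1e-4) = (curvature(τ + h; κ=κ) - curvature(τ - h; κ=κ)) / (2h)

dcurv_fd(24.0; κ=0.05)
```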
TermStructureModels.erase_nonstationary_paramMethod
erase_nonstationary_param(saved_params::Vector{Parameter_NUTS}; threshold=1)

This function filters out posterior samples that imply a unit root VAR system. Only stationary posterior samples remain.

Input

  • saved_params is the output of function posterior_NUTS.
  • Posterior samples with eigenvalues of the P-system greater than threshold are removed.

Output(2):

stationary samples, acceptance rate (%)

  • The second output indicates the share (%) of posterior samples that remain.
source
TermStructureModels.erase_nonstationary_paramMethod
erase_nonstationary_param(saved_params::Vector{Parameter}; threshold=1)

This function filters out posterior samples that imply a unit root VAR system. Only stationary posterior samples remain.

Input

  • saved_params is the first output of function posterior_sampler.
  • Posterior samples with eigenvalues of the P-system greater than threshold are removed.

Output(2):

stationary samples, acceptance rate (%)

  • The second output indicates the share (%) of posterior samples that remain.
source
TermStructureModels.fitted_yieldcurveMethod
fitted_yieldcurve(tau_vec, saved_latent_params::Vector{LatentSpace}; data_scale=1200, is_parallel=false)

This function generates the fitted yield curve.

Input

  • tau_vec is a set of maturities of interest. tau_vec does not need to be the same as the one used for the estimation.
  • saved_latent_params is a transformed posterior sample using function latentspace.
  • is_parallel enables multi-threaded parallel computation when set to true.

Output

  • Vector{YieldCurve}(, iteration)
  • yields and latents contain initial observations.
source
TermStructureModels.generativeMethod
generative(T, dP, tau_n, p, noise::Float64; kappaQ, kQ_infty, KPXF, GPXFXF, OmegaXFXF, data_scale=1200)

This function generates simulation data given parameters. Note that all parameters are in the latent factor state space (i.e., parameters in struct LatentSpace). The notation differs slightly from the paper because script (mathcal) letters are hard to type in code: mathcal{F} in the paper is written as F in the code, and "F" in the paper is written as XF.

Input

  • noise: Variance of the measurement errors

Output(3)

yields, latents, macros

  • yields = Matrix{Float64}(obs,T,length(tau_n))
  • latents = Matrix{Float64}(obs,T,dimQ())
  • macros = Matrix{Float64}(obs,T,dP - dimQ())
source
TermStructureModels.hessianFunction
hessian(f, x, index=[])

This function calculates the Hessian matrix of a scalar function f at x. If index is not empty, it calculates the Hessian matrix of the function with respect to the selected variables.

source
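A central finite-difference sketch of a Hessian (for illustration; the package may compute it differently, and the index argument is not reproduced here):

```julia
function hessian_fd(f, x; h=1e-3)
    n = length(x)
    H = zeros(n, n)
    for i in 1:n, j in 1:n
        ei = zeros(n); ei[i] = h
        ej = zeros(n); ej[j] = h
        # central difference for the (i, j) second derivative
        H[i, j] = (f(x + ei + ej) - f(x + ei - ej) -
                   f(x - ei + ej) + f(x - ei - ej)) / (4h^2)
    end
    return H
end

hessian_fd(x -> x[1]^2 + 3x[1] * x[2], [1.0, 2.0])   # ≈ [2 3; 3 0]
```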
TermStructureModels.ineff_factorMethod
ineff_factor(saved_params::Vector{Parameter_NUTS}; is_parallel=false)

This function returns the inefficiency factors for each parameter.

Input

  • Vector{Parameter_NUTS} from posterior_NUTS
  • is_parallel enables multi-threaded parallel computation when set to true.

Output

  • Estimated inefficiency factors are returned as a Tuple(q, nu0, kappaQ, kQ_infty, gamma, SigmaO, varFF, phi). For example, if you want to access the inefficiency factor of phi, you can use Output.phi.
  • If fix_const_PC1==true in your optimized Hyperparameter struct, Output.phi[1,1] may be unreliable and should be ignored.
source
TermStructureModels.ineff_factorMethod
ineff_factor(saved_params::Vector{Parameter}; is_parallel=false)

This function returns the inefficiency factors for each parameter.

Input

  • Vector{Parameter} from posterior_sampler
  • is_parallel enables multi-threaded parallel computation when set to true.

Output

  • Estimated inefficiency factors are returned as a Tuple(kappaQ, kQ_infty, gamma, SigmaO, varFF, phi). For example, if you want to access the inefficiency factor of phi, you can use Output.phi.
  • If fix_const_PC1==true in your optimized Hyperparameter struct, Output.phi[1,1] may be unreliable and should be ignored.
source
TermStructureModels.isstationaryMethod
isstationary(GPFF; threshold)

This function checks whether a reduced-form VAR matrix has unit roots. If there is at least one unit (or explosive) root, it returns false.

Input

  • GPFF should not include intercepts. Also, GPFF is a dP by dP*p matrix where the coefficient at lag 1 comes first, and the lag p slope matrix comes last.
  • If an eigenvalue of the companion-form system is greater than threshold, the system is classified as non-stationary. Typically, threshold is set to 1.

Output

  • boolean
source
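The check can be sketched with companion-form eigenvalues (a stand-alone illustration of the same idea, not the package's code):

```julia
using LinearAlgebra

# GPFF is dP × dP*p, lag-1 block first; stack it into companion form
# and inspect the spectral radius
function isstationary_sketch(GPFF; threshold=1.0)
    dP = size(GPFF, 1)
    p = size(GPFF, 2) ÷ dP
    companion = zeros(dP * p, dP * p)
    companion[1:dP, :] = GPFF
    companion[dP+1:end, 1:dP*(p-1)] = I(dP * (p - 1))
    return maximum(abs, eigvals(companion)) < threshold
end

isstationary_sketch([0.5 0.2])   # univariate VAR(2): stationary, returns true
```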
TermStructureModels.latentspaceMethod
latentspace(saved_params, yields, tau_n; data_scale=1200, pca_loadings=[], is_parallel=false)

This function translates the principal components state space into the latent factor state space.

Input

  • data_scale::scalar: In typical affine term structure models, theoretical yields are in decimal and not annualized. However, for convenience (public data usually contains annualized percentage yields) and numerical stability, we sometimes want to scale up yields, so want to use (data_scale*theoretical yields) as variable yields. In this case, you can use the data_scale option. For example, we can set data_scale = 1200 and use annualized percentage monthly yields as yields.
  • pca_loadings=Matrix{, dQ, size(yields, 2)} stores the loadings for the first dQ principal components (so principal_components = yields * pca_loadings'), and you may optionally provide these loadings externally; if omitted, the package computes them internally via PCA. 
  • is_parallel enables multi-threaded parallel computation when set to true.

Output

  • Vector{LatentSpace}(, iteration)
  • Latent factors contain initial observations.
source
TermStructureModels.log_marginalMethod
log_marginal(PCs, macros, rho, tuned::Hyperparameter, tau_n, Wₚ; psi=[], psi_const=[], medium_tau, kappaQ_prior_pr, fix_const_PC1)

This function calculates a value of the marginal likelihood. Only the transition equation is used to calculate it.

Input

  • tuned is a point where the marginal likelihood is evaluated.
  • psi_const and psi are multiplied with prior variances of coefficients of the intercept and lagged regressors in the orthogonalized transition equation. They are used for imposing zero prior variances. An empty default value means that you do not use this function. [psi_const psi][i,j] corresponds to phi[:,1:1+dP*p][i,j].

Output

  • the log marginal likelihood of the VAR system.
source
TermStructureModels.loglik_meaMethod
loglik_mea(yields, tau_n; kappaQ, kQ_infty, phi, varFF, SigmaO, data_scale, pca_loadings)

This function generates the log likelihood of the measurement equation.

Output

  • the measurement equation part of the log likelihood
source
TermStructureModels.loglik_tranMethod
loglik_tran(PCs, macros; phi, varFF)

This function calculates the log likelihood of the transition equation.

Output

  • log likelihood of the transition equation.
source
TermStructureModels.phi_2_phi₀_CMethod
phi_2_phi₀_C(; phi)

This function divides phi into the lagged regressor part and the contemporaneous regressor part.

Output(3)

phi0, C = C0 + I, C0

  • phi0: coefficients for the lagged regressors
  • C: coefficients for the dependent variables when all contemporaneous variables are on the LHS of the orthogonalized equations. Therefore, the diagonals of C are ones. Note that since the contemporaneous variables get negative signs when they are on the RHS, the signs of C do not change whether they are on the RHS or LHS.
source
TermStructureModels.posterior_NUTSMethod
posterior_NUTS(p, yields, macros, tau_n, rho, NUTS_nadapt, iteration; init_param=[], prior_q, prior_nu0, psi=[], psi_const=[], gamma_bar=[], prior_mean_diff_kappaQ, prior_std_diff_kappaQ, mean_kQ_infty=0, std_kQ_infty=0.1, fix_const_PC1=false, data_scale=1200, pca_loadings=[], NUTS_target_acceptance_rate=0.65, NUTS_max_depth=10)

This function implements the NUTS-within-Gibbs sampler. Gibbs blocks that cannot be updated with conjugate priors are sampled using the NUTS sampler.

Input

  • p: The lag length of the VAR system
  • NUTS_nadapt: Number of iterations for tuning settings in the NUTS sampler. The warmup samples are included in the output, so you should discard them.
  • iteration: Number of posterior samples
  • init_param: Starting point of the sampler. It should be of type Parameter_NUTS.
  • prior_q: A 4 by 2 matrix that contains the prior distribution for q. All entries should be objects in Distributions.jl. For hyperparameters that do not need to be optimized, assigning a Dirac(::Float64) prior to the corresponding entry fixes that hyperparameter and optimizes only the remaining hyperparameters.
  • prior_nu0: The prior distribution for nu0 - (dP + 1). It should be an object in Distributions.jl.
  • psi_const and psi are multiplied with prior variances of coefficients of the intercept and lagged regressors in the orthogonalized transition equation. They are used for imposing zero prior variances. An empty default value means that you do not use this function. [psi_const psi][i,j] corresponds to phi[:,1:1+dP*p][i,j]. psi should be a (dP × dP*p) matrix.
  • prior_mean_diff_kappaQ and prior_std_diff_kappaQ are vectors that contain the means and standard deviations of the Normal distributions for [kappaQ[1]; diff(kappaQ)]. Once Normal priors are assigned to these parameters, the prior for kappaQ[1] is truncated to (0, 1), and the priors for diff(kappaQ) are truncated to (−1, 0).
  • pca_loadings=Matrix{, dQ, size(yields, 2)} stores the loadings for the first dQ principal components (so principal_components = yields * pca_loadings'), and you may optionally provide these loadings externally; if omitted, the package computes them internally via PCA. 
  • NUTS_target_acceptance_rate, NUTS_max_depth are the arguments of the NUTS sampler in AdvancedHMC.jl.

Output

Vector{Parameter_NUTS}(posterior, iteration)

source
TermStructureModels.posterior_samplerMethod
posterior_sampler(yields, macros, tau_n, rho, iteration, tuned::Hyperparameter; medium_tau=collect(24:3:48), init_param=[], psi=[], psi_const=[], gamma_bar=[], kappaQ_prior_pr=[], mean_kQ_infty=0, std_kQ_infty=0.1, fix_const_PC1=false, data_scale=1200, pca_loadings=[], kappaQ_proposal_mode=[])

This function samples from the posterior distribution.

Input

  • iteration: Number of posterior samples
  • tuned: Optimized hyperparameters used during estimation
  • init_param: Starting point of the sampler. It should be of type Parameter.
  • psi_const and psi are multiplied with prior variances of coefficients of the intercept and lagged regressors in the orthogonalized transition equation. They are used for imposing zero prior variances. An empty default value means that you do not use this function. [psi_const psi][i,j] corresponds to phi[:,1:1+dP*p][i,j]. psi should be a (dP × dP*p) matrix.
  • kappaQ_prior_pr is a vector of prior distributions for kappaQ under the JSZ model: each element specifies the prior for kappaQ[i] and must be provided as a Distributions.jl object. This option is only needed when using the JSZ model.
  • pca_loadings=Matrix{, dQ, size(yields, 2)} stores the loadings for the first dQ principal components (so principal_components = yields * pca_loadings'), and you may optionally provide these loadings externally; if omitted, the package computes them internally via PCA.
  • kappaQ_proposal_mode=Vector{, dQ} contains the center of the proposal distribution for kappaQ. If it is empty, it is optimized by MLE.

Output(2)

Vector{Parameter}(posterior, iteration), acceptance rate of the MH algorithm

source
TermStructureModels.prior_kappaQMethod
prior_kappaQ(medium_tau, pr)

This function derives the decay parameter kappaQ that maximizes the curvature factor loading at each candidate medium-term maturity. It then imposes a discrete prior distribution on these maximizers with prior probability vector pr.

Input

  • medium_tau::Vector(candidate medium maturities, # of candidates)
  • pr::Vector(probability, # of candidates)

Output

  • a discrete prior distribution whose support is the set of maximizers kappaQ
source
TermStructureModels.reducedformMethod
reducedform(saved_params, yields, macros, tau_n; data_scale=1200, pca_loadings=[], is_parallel=false)

This function converts posterior samples to the reduced form VAR parameters.

Input

  • saved_params is the first output of function posterior_sampler.
  • pca_loadings=Matrix{, dQ, size(yields, 2)} stores the loadings for the first dQ principal components (so principal_components = yields * pca_loadings'), and you may optionally provide these loadings externally; if omitted, the package computes them internally via PCA.
  • is_parallel enables multi-threaded parallel computation when set to true.

Output

  • Posterior samples in terms of struct ReducedForm
source
TermStructureModels.term_premiumMethod
term_premium(tau_interest, tau_n, saved_params, yields, macros; data_scale=1200, pca_loadings=[], is_parallel=false)

This function generates posterior samples of the term premiums.

Input

  • Maturity of interest tau_interest for calculating TP
  • saved_params from function posterior_sampler
  • pca_loadings=Matrix{, dQ, size(yields, 2)} stores the loadings for the first dQ principal components (so principal_components = yields * pca_loadings'), and you may optionally provide these loadings externally; if omitted, the package computes them internally via PCA. 
  • is_parallel enables multi-threaded parallel computation when set to true.

Output(3)

saved_TP, saved_tv_TP, saved_tv_EH

  • saved_TP::Vector{TermPremium}(, iteration)
  • saved_tv_TP::Vector{Array}(, iteration)
  • saved_tv_EH::Vector{Array}(, iteration)
  • Both the term premiums and expectation hypothesis components are decomposed into the time-invariant part and time-varying part. For the maturity tau_interest[i] and j-th posterior sample, the time-varying parts are saved in saved_tv_TP[j][:, :, i] and saved_tv_EH[j][:, :, i]. The time-varying parts driven by the k-th pricing factor are stored in saved_tv_TP[j][:, k, i] and saved_tv_EH[j][:, k, i].
source
TermStructureModels.tuning_hyperparameterMethod
tuning_hyperparameter(yields, macros, tau_n, rho; populationsize=50, maxiter=10_000, medium_tau=collect(24:3:48), upper_q=[1 1; 1 1; 1 1; 4 4; 100 100], mean_kQ_infty=0, std_kQ_infty=0.1, upper_nu0=[], mean_phi_const=[], fix_const_PC1=false, upper_p=24, mean_phi_const_PC1=[], data_scale=1200, kappaQ_prior_pr=[], init_nu0=[], is_pure_EH=false, psi=[], psi_const=[], pca_loadings=[], prior_mean_diff_kappaQ=[], prior_std_diff_kappaQ=[], optimizer=:LBFGS, ml_tol=1.0, init_x=[])

This function optimizes the hyperparameters by maximizing the marginal likelihood of the transition equation.

Input

  • When comparing marginal likelihoods between models, the data for the dependent variable should be the same across models. To achieve this, we set the period of the dependent variable based on upper_p. For example, if upper_p = 3, yields[4:end,:] and macros[4:end,:] are the data for the dependent variable. yields[1:3,:] and macros[1:3,:] are used for setting initial observations for all lags.
  • optimizer: The optimization algorithm to use.
    • :LBFGS (default): Uses unconstrained LBFGS from Optim.jl with hybrid parameter transformations (exp for non-negativity, sigmoid for bounded parameters). Alternates between optimizing hyperparameters (with fixed lag) and selecting the best lag (with fixed hyperparameters) until convergence.
    • :BBO: Uses a differential evolutionary algorithm (BlackBoxOptim.jl). The lag and hyperparameters are optimized simultaneously.
  • ml_tol: Tolerance for parsimony in lag selection (only for :LBFGS). After finding the lag with the best marginal likelihood, the algorithm iteratively selects smaller lags if their marginal likelihood is within ml_tol of the best. This favors simpler models (smaller lags) when performance is comparable.
  • init_x: Initial values for hyperparameters and lag (only for :LBFGS). Should be a vector of length 12 in the format [vec(q); nu0-(dP+1); p]. If empty (default), uses [0.1, 0.1, 0.1, 2.0, 1.0, 0.1, 0.1, 0.1, 2.0, 1.0, 1.0, 1].
  • populationsize and maxiter are options for the optimizer.
    • populationsize: the number of candidate solutions in each generation (only for :BBO)
    • maxiter: the maximum number of iterations
  • The lower bounds for q and nu0 are 0 and dP+2.
  • The upper bounds for q, nu0, and VAR lag can be set by upper_q, upper_nu0, and upper_p.
    • The default option for upper_nu0 is the time-series length of the data.
  • If you use the default option for mean_phi_const,
    1. mean_phi_const[dQ+1:end] is a zero vector.
    2. mean_phi_const[1:dQ] is calibrated to make the prior mean of λₚ a zero vector.
    3. After step 2, mean_phi_const[1] is replaced with mean_phi_const_PC1 if it is not empty.
  • mean_phi_const = Matrix(your prior, dP, upper_p)
  • mean_phi_const[:,i] is the prior mean for the VAR(i) constant. Therefore, mean_phi_const is a matrix only in this function. In other functions, mean_phi_const is a vector for the orthogonalized VAR system with the selected lag.
  • When fix_const_PC1==true, the first element in the constant term in the orthogonalized VAR is fixed to its prior mean during posterior sampling.
  • data_scale::scalar: In a typical affine term structure model, theoretical yields are in decimals and not annualized. However, for convenience (public data usually contains annualized percentage yields) and numerical stability, we sometimes want to scale up yields and use (data_scale*theoretical yields) as the variable yields. In this case, you can use the data_scale option. For example, we can set data_scale = 1200 and use annualized percentage monthly yields as yields.
  • kappaQ_prior_pr is a vector of prior distributions for kappaQ under the JSZ model: each element specifies the prior for kappaQ[i] and must be provided as a Distributions.jl object. Alternatively, you can supply prior_mean_diff_kappaQ and prior_std_diff_kappaQ, which define means and standard deviations for Normal priors on [kappaQ[1]; diff(kappaQ)]; the implied Normal prior for each kappaQ[i] is then truncated to (0, 1). These options are only needed when using the JSZ model.
  • is_pure_EH::Bool: This option matters only when mean_phi_const=[]. In that case, is_pure_EH=false sets mean_phi_const to a zero vector, while is_pure_EH=true sets mean_phi_const to imply the pure expectation hypothesis.
  • psi_const and psi are multiplied with prior variances of coefficients of the intercept and lagged regressors in the orthogonalized transition equation. They are used for imposing zero prior variances. An empty default value means that you do not use this function. [psi_const psi][i,j] corresponds to phi[:,1:1+dP*p][i,j]. psi is a (dP × dP*upper_p) matrix; when a shorter lag p < upper_p is selected, `psi[:, 1:dP*p]` is automatically used.
  • pca_loadings=Matrix{, dQ, size(yields, 2)} stores the loadings for the first dQ principal components (so principal_components = yields * pca_loadings'), and you may optionally provide these loadings externally; if omitted, the package computes them internally via PCA.

Output(2)

Optimized hyperparameter, optimization result

  • Note that we minimize the negative log marginal likelihood, so the second output is for the minimization problem.
  • When optimizer=:LBFGS, the second output is a NamedTuple with fields minimizer, minimum, p, all_minimizer, all_minimum.
source
TermStructureModels.tuning_hyperparameter_with_vsMethod
tuning_hyperparameter_with_vs(yields, macros, tau_n, rho; populationsize=50, maxiter=10_000, medium_tau=collect(24:3:48), upper_q=[1 1; 1 1; 1 1; 4 4; 100 100], mean_kQ_infty=0, std_kQ_infty=0.1, upper_nu0=[], mean_phi_const=[], fix_const_PC1=false, upper_p=24, mean_phi_const_PC1=[], data_scale=1200, kappaQ_prior_pr=[], init_nu0=[], is_pure_EH=false, psi_const=[], pca_loadings=[], prior_mean_diff_kappaQ=[], prior_std_diff_kappaQ=[], optimizer=:LBFGS, ml_tol=1.0, init_x=[])

This function optimizes the hyperparameters with automatic variable selection: selects which macro variables affect latent factors (PCs).

Input

  • When comparing marginal likelihoods between models, the data for the dependent variable should be the same across models. To achieve this, we set the period of the dependent variable based on upper_p. For example, if upper_p = 3, yields[4:end,:] and macros[4:end,:] are the data for the dependent variable. yields[1:3,:] and macros[1:3,:] are used for setting initial observations for all lags.
  • optimizer: The optimization algorithm to use.
    • :LBFGS (default): Alternates between lag selection, forward stepwise variable selection for coefficients of macro variables on latent factors, and hyperparameter optimization. Variable selection stops when log marginal likelihood improvement ≤ 1.0.
    • :BBO: Uses BlackBoxOptim.jl to optimize lag, hyperparameters, and variable selection simultaneously.
  • ml_tol: Tolerance for parsimony in lag selection (only for :LBFGS). After finding the lag with the best marginal likelihood, the algorithm iteratively selects smaller lags if their marginal likelihood is within ml_tol of the best. This favors simpler models (smaller lags) when performance is comparable.
  • init_x: Initial values for hyperparameters and lag (only for :LBFGS). Should be a vector of length 12 in the format [vec(q); nu0-(dP+1); p]. If empty (default), uses [0.1, 0.1, 0.1, 2.0, 1.0, 0.1, 0.1, 0.1, 2.0, 1.0, 1.0, 1].
  • populationsize and maxiter are options for the optimizer.
    • populationsize: the number of candidate solutions in each generation (only for :BBO)
    • maxiter: the maximum number of iterations
  • The lower bounds for q and nu0 are 0 and dP+2.
  • The upper bounds for q, nu0, and VAR lag can be set by upper_q, upper_nu0, and upper_p.
    • The default option for upper_nu0 is the time-series length of the data.
  • If you use the default option for mean_phi_const,
    1. mean_phi_const[dQ+1:end] is a zero vector.
    2. mean_phi_const[1:dQ] is calibrated to make the prior mean of λₚ a zero vector.
    3. After step 2, mean_phi_const[1] is replaced with mean_phi_const_PC1 if it is not empty.
  • mean_phi_const = Matrix(your prior, dP, upper_p)
  • mean_phi_const[:,i] is the prior mean for the VAR(i) constant. Therefore, mean_phi_const is a matrix only in this function. In other functions, mean_phi_const is a vector for the orthogonalized VAR system with the selected lag.
  • When fix_const_PC1==true, the first element in the constant term in the orthogonalized VAR is fixed to its prior mean during posterior sampling.
  • data_scale::scalar: In a typical affine term structure model, theoretical yields are in decimals and not annualized. However, for convenience (public data usually contains annualized percentage yields) and numerical stability, we sometimes want to scale up yields and use (data_scale*theoretical yields) as the variable yields. In this case, you can use the data_scale option. For example, we can set data_scale = 1200 and use annualized percentage monthly yields as yields.
  • kappaQ_prior_pr is a vector of prior distributions for kappaQ under the JSZ model: each element specifies the prior for kappaQ[i] and must be provided as a Distributions.jl object. Alternatively, you can supply prior_mean_diff_kappaQ and prior_std_diff_kappaQ, which define means and standard deviations for Normal priors on [kappaQ[1]; diff(kappaQ)]; the implied Normal prior for each kappaQ[i] is then truncated to (0, 1). These options are only needed when using the JSZ model.
  • is_pure_EH::Bool: This option matters only when mean_phi_const=[]. In that case, is_pure_EH=false sets mean_phi_const to a zero vector, while is_pure_EH=true sets mean_phi_const to imply the pure expectation hypothesis.
  • psi_const is multiplied with the prior variance of the intercept coefficients in the orthogonalized transition equation.
  • pca_loadings = Matrix(your loadings, dQ, size(yields, 2)) stores the loadings for the first dQ principal components (so principal_components = yields * pca_loadings'), and you may optionally provide these loadings externally; if omitted, the package computes them internally via PCA.
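The init_x format described above can be assembled as a sketch like the following; dP and the numeric values are illustrative (they happen to reproduce the documented default), and vec stacks q column by column.

```julia
# Assembling init_x in the documented format [vec(q); nu0 - (dP + 1); p].
q   = [0.1 0.1; 0.1 0.1; 0.1 0.1; 2.0 2.0; 1.0 1.0]  # 5 × 2 shrinkage matrix
dP  = 12                    # state dimension (hypothetical)
nu0 = dP + 2.0              # inverse-Wishart d.f.; the lower bound is dP + 2
p   = 1                     # initial VAR lag

init_x = [vec(q); nu0 - (dP + 1); p]
```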

Output(3)

Optimized hyperparameter, optimization result, psi matrix

  • The second output contains optimization results: when optimizer=:LBFGS, a NamedTuple with minimizer, minimum, p, all_minimizer, all_minimum, selected_vars, psi; when optimizer=:BBO, a NamedTuple with opt (bboptimize result), selected_vars, psi. selected_vars is a sorted list of (lag, variable) tuples indicating which columns are included beyond the always-included columns 1:dQ.
  • The third output is psi (dP × dPp), the prior variance scaling matrix for lagged regressors in the orthogonalized transition equation. Backward variable selection is applied to all columns except lag-1 latent factors (j ≤ dQ, k = 1): setting psi[1:dQ, col] = 0 excludes a variable's effect on latent factors. For lag k, variable j, the column index is (k-1)dP+j.
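The column bookkeeping for psi can be sketched as follows; the sizes are hypothetical, and the helper col is introduced here only for illustration.

```julia
# Hypothetical sizes; psi scales the prior variances of lagged regressors.
dP, p, dQ = 5, 3, 2
psi = ones(dP, dP * p)
col(k, j) = (k - 1) * dP + j   # column for lag k, variable j

psi[1:dQ, col(2, 4)] .= 0.0    # exclude variable 4 at lag 2 from the PC equations
```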
source

Internal Functions

TermStructureModels.AₚMethod
Aₚ(Aₓ_, Bₓ_, T0P_, Wₒ)

Input

  • Aₓ_, Bₓ_, and T0P_ are outputs of functions Aₓ, Bₓ, and T0P, respectively.

Output

  • Aₚ
source
TermStructureModels.NIG_NIGMethod
NIG_NIG(y, X, β₀, B₀, α₀, δ₀)

Normal-InverseGamma-Normal-InverseGamma update

  • prior: β|σ² ~ MvNormal(β₀,σ²B₀), σ² ~ InverseGamma(α₀,δ₀)
  • likelihood: y|β,σ² = Xβ + MvNormal(zeros(T,1),σ²I(T))

Output(2)

β, σ²

  • posterior sample
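The conjugate update behind this sampler can be sketched with the standard textbook Normal-Inverse-Gamma posterior formulas; this is a sketch of the posterior-parameter step only (the sampling step is omitted), and NIG_NIG's internals may differ.

```julia
using LinearAlgebra

# Standard Normal-Inverse-Gamma posterior parameters (textbook formulas).
function nig_update(y, X, β0, B0, α0, δ0)
    T  = length(y)
    B1 = inv(inv(B0) + X'X)              # posterior scale of β's covariance
    β1 = B1 * (inv(B0) * β0 + X'y)       # posterior mean of β
    α1 = α0 + T / 2                      # posterior shape of σ²
    δ1 = δ0 + (y'y + β0' * inv(B0) * β0 - β1' * inv(B1) * β1) / 2
    return β1, B1, α1, δ1
end

# With a diffuse prior and X = ones, the posterior mean approaches mean(y).
y = [1.0, 2.0, 3.0, 4.0]
X = ones(4, 1)
β1, B1, α1, δ1 = nig_update(y, X, [0.0], 1e8 * Matrix(I, 1, 1), 2.0, 1.0)
```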
source
TermStructureModels.PCs_2_latentsMethod
PCs_2_latents(yields, tau_n; kappaQ, kQ_infty, KPF, GPFF, OmegaFF, data_scale, pca_loadings=[])

Notation XF is for the latent factor space and notation F is for the PC state space.

Input

  • data_scale::scalar: In a typical affine term structure model, theoretical yields are in decimals and not annualized. However, for convenience (public data usually contains annualized percentage yields) and numerical stability, we sometimes want to scale up yields and use (data_scale*theoretical yields) as the variable yields. In this case, you can use the data_scale option. For example, we can set data_scale = 1200 and use annualized percentage monthly yields as yields.
  • pca_loadings = Matrix(your loadings, dQ, size(yields, 2)) stores the loadings for the first dQ principal components (so principal_components = yields * pca_loadings'), and you may optionally provide these loadings externally; if omitted, the package computes them internally via PCA.

Output(6)

latent, kappaQ, kQ_infty, KPXF, GPXFXF, OmegaXFXF

  • Latent factors contain initial observations.
source
TermStructureModels.T0PMethod
T0P(T1X_, Aₓ_, Wₚ, c)

Input

  • T1X_ and Aₓ_ are outputs of functions T1X and Aₓ, respectively. c is the sample mean of PCs.

Output

  • T0P
source
TermStructureModels._termPremiumMethod
_termPremium(τ, PCs, macros, bτ_, T0P_, T1X_; kappaQ, kQ_infty, KPF, GPFF, ΩPP, data_scale)

This function calculates the term premium for maturity τ.

Input

  • data_scale::scalar: In a typical affine term structure model, theoretical yields are in decimals and not annualized. However, for convenience (public data usually contains annualized percentage yields) and numerical stability, we sometimes want to scale up yields and use (data_scale*theoretical yields) as the variable yields. In this case, you can use the data_scale option. For example, we can set data_scale = 1200 and use annualized percentage monthly yields as yields.

Output(4)

TP, timevarying_TP, const_TP, jensen

  • TP: term premium of maturity τ
  • timevarying_TP: contributions of each [PCs macros] on TP at each time $t$ (row: time, col: variable)
  • const_TP: Constant part of TP
  • jensen: Jensen's Inequality part in TP
  • The output excludes the time period for the initial observations.
source
TermStructureModels.aτMethod
aτ(N, bτ_, tau_n, Wₚ; kQ_infty, ΩPP, data_scale)
aτ(N, bτ_; kQ_infty, ΩXX, data_scale)

This function has two methods (multiple dispatch).

Input

  • When Wₚ ∈ arguments: This function calculates using ΩPP.
  • Otherwise: This function calculates using ΩXX = OmegaXFXF[1:dQ, 1:dQ], so parameters are in the latent factor space and Wₚ is not needed.
  • bτ_ is an output of function bτ.
  • data_scale::scalar: In a typical affine term structure model, theoretical yields are in decimals and not annualized. However, for convenience (public data usually contains annualized percentage yields) and numerical stability, we sometimes want to scale up yields and use (data_scale*theoretical yields) as the variable yields. In this case, you can use the data_scale option. For example, we can set data_scale = 1200 and use annualized percentage monthly yields as yields.

Output

  • Vector{Float64}(aτ, N)
  • For the i-th maturity, Output[i] is the corresponding aτ.
source
TermStructureModels.btauMethod
btau(N; kappaQ)

This function solves the difference equation for btau in closed form, assuming distinct eigenvalues under the JSZ model.

Output

  • For maturity i, btau[:, i] is a vector of factor loadings.
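Under a JSZ-style normalization in which the short rate loads on each latent factor with weight one and the Q-feedback matrix is diagm(kappaQ), the loading recursion has a geometric-sum closed form. The following is a hedged sketch of that idea; the package's exact conventions and scaling may differ.

```julia
# b_tau = 1 .+ kappaQ .* b_{tau-1} with b_1 = 1 has the elementwise closed form
# b_tau = (1 - kappaQ^tau) / (1 - kappaQ) for distinct eigenvalues.
kappaQ = [0.99, 0.95, 0.9]
τ = 12

b_closed    = (1 .- kappaQ .^ τ) ./ (1 .- kappaQ)
b_recursive = sum(kappaQ .^ j for j in 0:τ-1)   # geometric sum, same object
```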
source
TermStructureModels.bτMethod
bτ(N; kappaQ, dQ)

This function solves the difference equation for bτ.

Output

  • For maturity i, bτ[:, i] is a vector of factor loadings.
source
TermStructureModels.jensens_inequalityMethod
jensens_inequality(τ, bτ_, T1X_; ΩPP, data_scale)

This function evaluates the Jensen's Inequality term. All terms are invariant with respect to the data_scale, except for this Jensen's inequality term, so the term needs to be scaled down by data_scale.

Output

  • Jensen's Inequality term for maturity τ.
source
TermStructureModels.loglik_NUTSMethod
loglik_NUTS(i, yields, PCs, tau_n, macros, dims_phi, p; phiQ, varFFQ, diff_kappaQ, kQ_infty, phi, varFF, SigmaO, data_scale, pca_loadings)

This function calculates the likelihood of the NUTS block.

source
TermStructureModels.loglik_mea2Method
loglik_mea2(yields, tau_n, p; kappaQ, kQ_infty, ΩPP, SigmaO, data_scale, pca_loadings)

This function is the same as loglik_mea but it requires ΩPP as an input.

source
TermStructureModels.loglik_mea_NUTSMethod
loglik_mea_NUTS(yields, tau_n; kappaQ, kQ_infty, phi, varFF, SigmaO, data_scale, pca_loadings)

This function generates the log likelihood of the measurement equation. It is used for posterior_NUTS.

Output

  • the measurement equation part of the log likelihood
source
TermStructureModels.logprior_phi0Method
logprior_phi0(phi0, mean_phi_const, rho::Vector, GQ_XX_mean, p, dQ, dP; psi_const, psi, q, nu0, Omega0, fix_const_PC1)

This is a companion function of prior_phi0. It calculates the log density of the prior distribution for phi0.

source
TermStructureModels.logprior_varFFMethod
logprior_varFF(varFF; nu0, Omega0::Vector)

This is a companion function of prior_varFF. It calculates the log density of the prior distribution for varFF.

source
TermStructureModels.longvarMethod
longvar(v)

This function calculates the long-run variance of v using the quadratic spectral window with bandwidth selection of Andrews (1991). The AR(1) approximation is used.

Input

  • Time-series vector v

Output

  • Estimated 2πh(0) of v, where h(x) is the spectral density of v at x.
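A minimal sketch of a quadratic-spectral long-run variance estimator with Andrews' (1991) AR(1) plug-in bandwidth is below. It targets the same object 2πh(0) = Σⱼ γⱼ described above, but it is an independent illustration: longvar's internals may differ.

```julia
using Statistics

# Quadratic spectral kernel; k(0) = 1 in the limit.
qs_kernel(x) = x == 0 ? 1.0 :
    (z = 6π * x / 5; 25 / (12 * (π * x)^2) * (sin(z) / z - cos(z)))

function longrun_variance(v::AbstractVector)
    T = length(v)
    u = v .- mean(v)
    ρ  = sum(u[2:end] .* u[1:end-1]) / sum(abs2, u[1:end-1])  # AR(1) fit
    α2 = 4ρ^2 / (1 - ρ)^4                                     # Andrews' α(2)
    S  = 1.3221 * (α2 * T)^(1 / 5)                            # plug-in bandwidth
    γ(j) = sum(u[1+j:end] .* u[1:end-j]) / T                  # autocovariance
    return γ(0) + 2 * sum(qs_kernel(j / S) * γ(j) for j in 1:T-1)
end

# Deterministic AR(1)-style series for illustration.
e = sin.(1:200)
v = accumulate((x, ε) -> 0.8x + ε, e)
lrv = longrun_variance(v)
```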
source
TermStructureModels.minnesotaMethod
minnesota(l, i, j; q, nu0, Omega0, dQ=[])

This function returns the unscaled prior variance of the Minnesota prior.

Input

  • lag l, dependent variable i, regressor j in the VAR(p)
  • q[:,1] and q[:,2] are [own, inner cross, outer cross, lag, intercept] shrinkages for the first dQ and remaining dP-dQ equations, respectively. Here, when the dependent variable is a principal component, inner cross refers to the other principal components (excluding itself), whereas outer cross refers to the macroeconomic variables. Likewise, when the dependent variable is a macroeconomic variable, inner cross refers to the other macroeconomic variables (excluding itself), whereas outer cross refers to the principal components.
  • nu0(d.f.), Omega0(scale): Inverse-Wishart prior for the error-covariance matrix of VAR(p).

Output

  • Minnesota part in the prior variance
source
TermStructureModels.mle_error_covarianceMethod
mle_error_covariance(yields, macros, tau_n, p; pca_loadings=[])

This function calculates the MLE estimates of the error covariance matrix of the VAR(p) model.

  • pca_loadings = Matrix(your loadings, dQ, size(yields, 2)) stores the loadings for the first dQ principal components (so principal_components = yields * pca_loadings'), and you may optionally provide these loadings externally; if omitted, the package computes them internally via PCA.
source
TermStructureModels.post_SigmaOMethod
post_SigmaO(yields, tau_n; kappaQ, kQ_infty, ΩPP, gamma, p, data_scale, pca_loadings)

Posterior sampler for the measurement errors

Output

  • Vector{Dist}(IG, N-dQ)
source
TermStructureModels.post_kQ_inftyMethod
post_kQ_infty(mean_kQ_infty, std_kQ_infty, yields, tau_n; kappaQ, phi, varFF, SigmaO, data_scale, pca_loadings)

Output

  • Full conditional posterior distribution
source
TermStructureModels.post_kappaQMethod
post_kappaQ(yields, prior_kappaQ_, tau_n; kQ_infty, phi, varFF, SigmaO, data_scale, pca_loadings)

Input

  • prior_kappaQ_ is an output of function prior_kappaQ.

Output

  • Full conditional posterior distribution
source
TermStructureModels.post_kappaQ2Method
post_kappaQ2(yields, prior_kappaQ_, tau_n; kappaQ, kQ_infty, phi, varFF, SigmaO, data_scale, x_mode, inv_x_hess, pca_loadings)

This function conducts the Metropolis-Hastings algorithm for the reparameterized kappaQ under the unrestricted JSZ form. x_mode and inv_x_hess constitute the mean and variance of the Normal proposal distribution.

  • Reparameterization: kappaQ[1] = x[1], kappaQ[2] = x[1] + x[2], kappaQ[3] = x[1] + x[2] + x[3]
  • Jacobian: [1 0 0; 1 1 0; 1 1 1]
  • Its determinant is 1.
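The determinant claim can be checked directly: the map x ↦ kappaQ is a cumulative sum, so its Jacobian is unit lower triangular. The numbers below are illustrative.

```julia
using LinearAlgebra

x = [0.98, -0.03, -0.05]                     # illustrative values
kappaQ = cumsum(x)                           # [x[1], x[1]+x[2], x[1]+x[2]+x[3]]
J = [1.0 0.0 0.0; 1.0 1.0 0.0; 1.0 1.0 1.0]  # Jacobian of the map
```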
source
TermStructureModels.post_kappaQ_phi_varFF_q_nu0Method
post_kappaQ_phi_varFF_q_nu0(yields, macros, tau_n, mean_phi_const, rho, prior_q, prior_nu0, prior_diff_kappaQ; phi, psi, psi_const, varFF, q, nu0, kappaQ, kQ_infty, SigmaO, fix_const_PC1, data_scale, pca_loadings, sampler, chain, is_warmup)

Full-conditional posterior sampler for kappaQ, phi and varFF

Input

  • prior_q: The 4 by 2 matrix that contains the prior distribution for q. All entries should be objects in Distributions.jl.
  • prior_nu0: The prior distribution for nu0 - (dP + 1). It should be an object in Distributions.jl.
  • prior_diff_kappaQ is a vector of the truncated normals(Distributions.truncated(Distributions.Normal(), lower, upper)). It has a prior for [kappaQ[1]; diff(kappaQ)].
  • When fix_const_PC1==true, the first element of the constant term in the orthogonalized VAR is fixed to its prior mean during posterior sampling.
  • sampler and chain are the objects in Turing.jl.
  • If the current step is in the warmup phase, set is_warmup=true.

Output(6)

chain, q, nu0, kappaQ, phi, varFF

source
TermStructureModels.post_phi_varFFMethod
post_phi_varFF(yields, macros, mean_phi_const, rho, prior_kappaQ_, tau_n; phi, psi, psi_const, varFF, q, nu0, Omega0, kappaQ, kQ_infty, SigmaO, fix_const_PC1, data_scale, pca_loadings)

Full-conditional posterior sampler for phi and varFF

Input

  • prior_kappaQ_ is an output of function prior_kappaQ.
  • When fix_const_PC1==true, the first element of the constant term in the orthogonalized VAR is fixed to its prior mean during posterior sampling.

Output(3)

phi, varFF, isaccept=Vector{Bool}(undef, dQ)

  • Returns a posterior sample.
source
TermStructureModels.prior_CMethod
prior_C(; Omega0::Vector)

This function translates the Inverse-Wishart prior to a series of the Normal-Inverse-Gamma (NIG) prior distributions. If the dimension is dₚ, there are dₚ NIG prior distributions. This function generates Normal priors.

Output:

  • unscaled prior of C in the LDLt decomposition, OmegaFF = inv(C)*diagm(varFF)*inv(C)'

Important note

prior variance for C[i,:] = varFF[i]*variance of output[i,:]
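The LDLt convention above, OmegaFF = inv(C)*diagm(varFF)*inv(C)', can be illustrated by splitting a Cholesky factor into a unit lower triangular part and a diagonal; the covariance matrix here is illustrative.

```julia
using LinearAlgebra

OmegaFF = [4.0 1.0 0.5; 1.0 3.0 0.2; 0.5 0.2 2.0]   # illustrative covariance
L = cholesky(Symmetric(OmegaFF)).L
d = diag(L)

invC  = L * Diagonal(1 ./ d)   # unit lower triangular factor, i.e. inv(C)
varFF = d .^ 2                 # the diagonal of the LDLt decomposition

reconstructed = invC * Diagonal(varFF) * invC'
```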

source
TermStructureModels.prior_gammaMethod
prior_gamma(yields, p; pca_loadings)

There is a hierarchical structure in the measurement equation. The prior means of the measurement errors are gamma[i], and each gamma[i] follows a Gamma(1, gamma_bar) distribution. This function decides gamma_bar empirically. OLS is used to estimate the measurement equation, and the residual variance is calculated for each maturity. The inverse of the average of these residual variances is set to gamma_bar.

Output

  • hyperparameter gamma_bar
source
TermStructureModels.prior_phi0Method
prior_phi0(mean_phi_const, rho::Vector, prior_kappaQ_, tau_n, Wₚ; psi_const, psi, q, nu0, Omega0, fix_const_PC1)

This function derives the prior distribution for coefficients of the lagged regressors in the orthogonalized VAR.

Input

  • prior_kappaQ_ is an output of function prior_kappaQ.
  • When fix_const_PC1==true, the first element of the constant term in the orthogonalized VAR is fixed to its prior mean during posterior sampling.

Output

  • Normal prior distributions on the intercepts and the slope coefficients of the lagged variables in the orthogonalized equations.
  • Output[:,1] for intercepts, Output[:,1+1:1+dP] for the first lag, Output[:,1+dP+1:1+2*dP] for the second lag, and so on.

Important note

prior variance for phi[i,:] = varFF[i]*var(output[i,:])

source
TermStructureModels.prior_varFFMethod
prior_varFF(; nu0, Omega0::Vector)

This function translates the Inverse-Wishart prior to a series of the Normal-Inverse-Gamma (NIG) prior distributions. If the dimension is dₚ, there are dₚ NIG prior distributions. This function generates Inverse-Gamma priors.

Output:

  • prior of varFF in the LDLt decomposition, OmegaFF = inv(C)*diagm(varFF)*inv(C)'
  • Each element in the output follows an Inverse-Gamma prior.
source
TermStructureModels.yphi_XphiMethod
yphi_Xphi(PCs, macros, p)

This function generates the dependent variable and the corresponding regressors in the orthogonalized transition equation.

Output(4)

yphi, Xphi, [ones(T - p) Xphi_lag], Xphi_contemporaneous, where Xphi = [ones(T - p) Xphi_lag Xphi_contemporaneous]

  • yphi and Xphi are full matrices. For the i-th equation, the dependent variable is yphi[:,i] and the regressor matrix is Xphi.
  • Xphi is the same for all orthogonalized transition equations. The orthogonalized equations differ in their contemporaneous regressors, so the corresponding regressors in Xphi should be excluded. The form of parameter phi performs that task by setting the coefficients of the excluded regressors to zero. In particular, in the last dP by dP block of phi, the diagonal and upper-triangular elements should be zero.
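The stacking described above can be sketched as follows; the variable names mirror the docstring, but the construction is illustrative and yphi_Xphi's internals may differ.

```julia
# Toy data: T × dP matrix of [PCs macros]; p lags.
data = [1.0 10.0; 2.0 20.0; 3.0 30.0; 4.0 40.0; 5.0 50.0]
T, dP = size(data)
p = 2

yphi = data[p+1:end, :]                                   # dependent variables
Xphi_lag = hcat([data[p+1-l:end-l, :] for l in 1:p]...)   # lag 1, …, lag p
Xphi_contemporaneous = yphi                               # current-period regressors
Xphi = [ones(T - p) Xphi_lag Xphi_contemporaneous]
```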
source