% $OpenXM: OpenXM/src/R/r-packages/hgm/man/hgm.cwishart.Rd,v 1.7 2015/03/21 22:49:34 takayama Exp $
\name{hgm.pwishart}
\alias{hgm.pwishart}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{
  The function hgm.pwishart evaluates the cumulative distribution function
  of the largest eigenvalue of a random Wishart matrix.
}
\description{
  The function hgm.pwishart evaluates the cumulative distribution function
  of the largest eigenvalue of a random Wishart matrix of size m times m.
}
\usage{
hgm.pwishart(m,n,beta,q0,approxdeg,h,dp,q,mode,method,err,automatic,assigned_series_error,verbose)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{m}{The dimension of the Wishart matrix.}
  \item{n}{The degrees of freedom (a parameter of the Wishart distribution).}
  \item{beta}{The eigenvalues of the inverse of the covariance matrix divided by 2
    (a parameter of the Wishart distribution).
    In other words, beta is equal to inverse(sigma)/2, where sigma is the covariance matrix.
  }
  \item{q0}{The point at which the matrix hypergeometric series is evaluated; q0>0.}
  \item{approxdeg}{
    Zonal polynomials up to degree approxdeg are calculated to evaluate
    values near the origin. A zonal polynomial is determined by a given
    partition (k1,...,km). We call the sum k1+...+km the degree.
  }
  \item{h}{
    A (small) step size for the Runge-Kutta method. h>0.
  }
  \item{dp}{
    Sampling interval of solutions by the Runge-Kutta method.
  }
  \item{q}{
    The second value y[0] returned by this function is Prob(L1 < q),
    where L1 is the first (largest) eigenvalue of the Wishart matrix.
  }
  \item{mode}{
    When mode=c(1,0,0), the function returns the evaluation
    of the matrix hypergeometric series and its derivatives at q0.
    When mode=c(1,1,(m^2+1)*p), intermediate values of P(L1 < x) with respect to
    p steps of x are also returned (see Example 2 below).
    The sampling interval is controlled by dp.
  }
  \item{method}{
    "a-rk4" is the default value.
    When method="a-rk4", the adaptive Runge-Kutta method is used,
    and steps are automatically adjusted by err.
  }
  \item{err}{
    When err=c(e1,e2), e1 is the absolute error and e2 is the relative error.
    As long as NaN is not returned, it is recommended to set
    err=c(0.0, 1e-10), because initial values are usually very small
    (see Example 1b below).
  }
  \item{automatic}{
    automatic=1 is the default value.
    If it is 1, the degree of the series approximation is increased until
    |(F(i)-F(i-1))/F(i-1)| < assigned_series_error, where
    F(i) is the degree i approximation of the hypergeometric series
    with matrix argument.
    Step sizes for the Runge-Kutta method are also set automatically from
    assigned_series_error when automatic=1.
  }
  \item{assigned_series_error}{
    assigned_series_error=0.00001 is the default value.
  }
  \item{verbose}{
    verbose=0 is the default value.
    If it is 1, then the steps of the automatic degree updates and several parameters
    are output to stdout and stderr.
  }
}
\details{
  The cumulative distribution function is evaluated by the Koev-Edelman algorithm
  when x is near the origin and by the HGM when x is far from the origin.
  A more accurate result can be obtained when the step size h is smaller,
  q0 takes a moderate value (neither very big nor very small),
  and approxdeg is larger.
  A heuristic way to set the parameters q0, h, and approxdeg properly
  is to make q larger and to check whether y[0] approaches 1
  (see Example 3 in the examples below).
%  \code{\link[RCurl]{postForm}}.
}
\value{
In the default mode, the output is x, y[0], ..., y[2^m];
y[0] is the value of the cumulative distribution
function P(L1 < x) at x, and y[1], ..., y[2^m] are some derivatives.
See the reference below.
}
\references{
H. Hashiguchi, Y. Numata, N. Takayama, A. Takemura,
Holonomic gradient method for the distribution function of the largest root of a Wishart matrix,
Journal of Multivariate Analysis 117 (2013), 296-312,
\url{http://dx.doi.org/10.1016/j.jmva.2013.03.011}.
}
\author{
Nobuki Takayama
}
\note{
This function does not work well in the following cases:
1. beta (the set of eigenvalues) is degenerate or almost degenerate.
2. beta is very skew; in other words, there is both a very large eigenvalue
and a very small eigenvalue.
The error control is done by a heuristic method.
The obtained value is not validated automatically.
}

%% ~Make other sections like Warning with \section{Warning }{....} ~

\seealso{
%%\code{\link{oxm.matrix_r2tfb}}
}
\examples{
## =====================================================
## Example 1.
## =====================================================
## Evaluate P(L1 < 10) for a 3x3 Wishart matrix with 5 degrees of freedom.
hgm.pwishart(m=3,n=5,beta=c(1,2,3),q=10)
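## =====================================================
## Example 1b (an illustrative sketch).
## =====================================================
## The same computation as Example 1, but with the accuracy controls passed
## explicitly.  The particular values of approxdeg, err, and
## assigned_series_error used here are illustrative choices, not recommended
## settings; compare y[0] with the value printed by Example 1.
v1b <- hgm.pwishart(m=3,n=5,beta=c(1,2,3),q=10,
                    approxdeg=20,               # zonal polynomials up to degree 20
                    err=c(0.0,1e-10),           # absolute and relative error for "a-rk4"
                    assigned_series_error=1e-7) # tighter than the default 0.00001
# v1b[2]  # y[0]=P(L1 < 10); the second entry by the Value section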
## =====================================================
## Example 2.
## =====================================================
## Keep 100 intermediate values of P(L1 < x) via mode=c(1,1,(16+1)*100).
b<-hgm.pwishart(m=4,n=10,beta=c(1,2,3,4),q0=1,q=10,approxdeg=20,mode=c(1,1,(16+1)*100));
## Reshape into a matrix with 16+1 columns, one row per sampled point.
c<-matrix(b,ncol=16+1,byrow=TRUE);
#plot(c)
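## =====================================================
## Example 3 (an illustrative sketch).
## =====================================================
## A sketch of the heuristic from the Details section: make q larger and
## check whether y[0]=P(L1 < q) approaches 1.  The grid of q values below
## is an arbitrary illustrative choice.
for (qq in c(5,10,20,40)) {
  v <- hgm.pwishart(m=3,n=5,beta=c(1,2,3),q=qq)
  print(c(q=qq, prob=v[2]))  # v[2] is y[0]=P(L1 < q) by the Value section
}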
}
% Add one or more standard keywords, see file 'KEYWORDS' in the
% R documentation directory.
\keyword{ Cumulative distribution function of random Wishart matrix }
\keyword{ Holonomic gradient method }
\keyword{ HGM }