
Diff for /OpenXM/doc/issac2000/homogeneous-network.tex between version 1.8 and 1.13

version 1.8, 2000/01/16 03:15:49 -> version 1.13, 2000/01/17 08:50:56

Line 1
% $OpenXM: OpenXM/doc/issac2000/homogeneous-network.tex,v 1.12 2000/01/17 08:06:15 noro Exp $

\subsection{Distributed computation with homogeneous servers}
\label{section:homog}
Line 34
Figure \ref{speedup}
shows the speedup factor under the above distributed computation
on Risa/Asir. For each $n$, two polynomials of degree $n$
with 3000-bit coefficients are generated and the product is computed.
The machine is a FUJITSU AP3000, a cluster of Sun workstations
connected by a high speed network; MPI over this network is used
to implement OpenXM.
\begin{figure}[htbp]
\epsfxsize=8.5cm
\epsffile{speedup.ps}
Line 53
the speedup factor depends on the ratio of
the computational cost and the communication cost for each unit operation.
Figure \ref{speedup} shows that
the speedup is satisfactory if the degree is large and $L$
is not large, say, up to 10 under the above environment.
If OpenXM provides operations for the broadcast and the reduction
such as {\tt MPI\_Bcast} and {\tt MPI\_Reduce} respectively, the cost of
sending $f_1$, $f_2$ and gathering $F_j$ may be reduced to $O(\log_2 L)$
and we can expect better results in such a case.
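
The client side of such a distributed product can be sketched as follows.
This is a minimal illustration, not the code used for the measurements
above: the server list {\tt Procs}, the client function {\tt dist\_mul()}
and the server-side function {\tt partial\_product()}, which is assumed to
multiply the $j$-th part of $f_1$ by $f_2$, are hypothetical.  Since $f_1$
and $f_2$ are sent to each of the $L$ servers one by one and each $F_j$ is
also received one by one, the number of point-to-point transfers, and hence
the communication cost, grows linearly in $L$.
\begin{verbatim}
/* Hypothetical client loop for the distributed product sketched above. */
def dist_mul(F1,F2,Procs)
{
  L = length(Procs);
  /* send the inputs and start the computation on every server */
  J = 0;
  for ( T = Procs; T != []; T = cdr(T) ) {
    ox_cmo_rpc(car(T),"partial_product",F1,F2,J,L);
    J++;
  }
  /* ask every server to push its result F_j onto its stream */
  map(ox_push_cmd,Procs,262);   /* 262 = SM_popCMO */
  /* gather the partial results and sum them up */
  R = 0;
  for ( T = Procs; T != []; T = cdr(T) )
    R = R + ox_get(car(T));
  return R;
}
\end{verbatim}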
   
\subsubsection{Competitive distributed computation by various strategies}
   
SINGULAR \cite{Singular} implements the {\it MP} interface for distributed
computation, and a competitive Gr\"obner basis computation is
illustrated as an example.
Such a distributed computation is also possible on OpenXM.
The following Risa/Asir function computes a Gr\"obner basis by
starting the computations simultaneously from the homogenized input and
the input itself.  The client watches the streams by {\tt ox\_select()},
and the result returned first is taken.  Then the remaining
server is reset.
   
\begin{verbatim}
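/* A sketch along the lines described above (argument names and the  */
/* exact calling sequence are illustrative, not necessarily the code */
/* shipped with Risa/Asir).  G : input polynomials, V : variables,   */
/* O : term order, P0, P1 : descriptors of two servers already       */
/* started by ox_launch().                                           */
def dgr(G,V,O,P0,P1)
{
  P = [P0,P1];
  map(ox_reset,P);          /* make sure both servers are idle       */
  /* P0 starts from the input itself, P1 from the homogenized input  */
  ox_cmo_rpc(P0,"dp_gr_main",G,V,0,1,O);
  ox_cmo_rpc(P1,"dp_gr_main",G,V,1,1,O);
  map(ox_push_cmd,P,262);   /* 262 = SM_popCMO : request the results */
  F = ox_select(P);         /* wait until some server returns data   */
  R = ox_get(car(F));       /* the result returned first is taken    */
  if ( car(F) == P0 ) { Win = "input itself"; Lose = P1; }
  else { Win = "homogenized"; Lose = P0; }
  ox_reset(Lose);           /* the remaining server is reset         */
  return [Win,R];
}
\end{verbatim}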
