
Diff for /OpenXM/doc/ascm2001/homogeneous-network.tex between version 1.3 (2001/03/07 08:12:56) and version 1.4 (2001/03/08 06:19:21)
% $OpenXM: OpenXM/doc/ascm2001/homogeneous-network.tex,v 1.3 2001/03/07 08:12:56 noro Exp $
   
\subsection{Distributed computation with homogeneous servers}
\label{section:homog}
such as {\tt MPI\_Bcast} and {\tt MPI\_Reduce} respectively, the cost of
sending $f_1$, $f_2$ and gathering $F_j$ may be reduced to $O(\log_2 L)$
and we can expect better results in such a case. In order to implement
such operations we need new specifications for inter-server communication
and the session management, which will be proposed as OpenXM-RFC 102.
We note that preliminary experiments show that the collective operations
work well on OpenXM.
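The gain from such tree-structured collective operations can be seen in a small cost model (plain Python written for this note; {\tt broadcast\_rounds} is our own toy function, not an OpenXM or MPI routine):

```python
import math

def broadcast_rounds(L):
    """Rounds needed to reach L servers when, in each round,
    every server that already holds the data forwards it to
    one server that does not (the number of holders doubles)."""
    holders, rounds = 1, 0
    while holders < L:
        holders = min(2 * holders, L)
        rounds += 1
    return rounds

# doubling gives ceil(log2 L) rounds, versus L - 1 point-to-point sends
for L in (2, 8, 100, 1024):
    assert broadcast_rounds(L) == math.ceil(math.log2(L))
```

The same doubling argument applies in reverse to gathering the $F_j$ with a reduce operation.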
   
\subsubsection{Competitive distributed computation by various strategies}
   
   
\subsubsection{Nesting of client-server communication}
   
Under OpenXM-RFC 100 an OpenXM server can be a client of other servers.
Figure \ref{tree} illustrates a tree-like structure of an OpenXM
client-server communication.
\begin{figure}
algorithms whose task can be divided into subtasks recursively. A
typical example is {\it quicksort}, where an array to be sorted is
partitioned into two sub-arrays and the algorithm is applied to each
sub-array. At each level of recursion, two subtasks are generated
and one can ask other OpenXM servers to execute them.
Though it makes little contribution to the efficiency in the case of
quicksort, we present an Asir program of this distributed quicksort
to demonstrate that OpenXM gives an easy way to test this algorithm.
In the program, a predefined constant {\tt LevelMax} determines
whether new servers are launched or all subtasks are done on the server itself.
   
\begin{verbatim}
def quickSort(A,P,Q,Level) {
\end{verbatim}
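The control structure of the Asir program can be mimicked in Python (a sketch under our own conventions: {\tt LevelMax} plays the same role as the Asir constant, and a worker thread stands in for a freshly launched OpenXM server):

```python
from concurrent.futures import ThreadPoolExecutor

# same role as the Asir constant: depth up to which workers are spawned
LevelMax = 2

def quick_sort(a, level=0):
    if len(a) <= 1:
        return list(a)
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    if level < LevelMax:
        # stand-in for asking another OpenXM server: one sub-array is
        # sorted on a worker thread while this "server" sorts the other
        with ThreadPoolExecutor(max_workers=1) as pool:
            fut = pool.submit(quick_sort, left, level + 1)
            right_sorted = quick_sort(right, level + 1)
            left_sorted = fut.result()
    else:
        # beyond LevelMax both subtasks stay on the current "server"
        left_sorted = quick_sort(left, level + 1)
        right_sorted = quick_sort(right, level + 1)
    return left_sorted + [pivot] + right_sorted
```

As in the Asir version, only one of the two subtasks is handed off at each level; the current process keeps the other, so no worker sits idle while its parent waits.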
   
Another example is a parallelization of the Cantor-Zassenhaus
algorithm for polynomial factorization over finite fields.
It is a recursive algorithm similar to quicksort.
At each level of the recursion, a given polynomial can be
divided into two non-trivial factors with some probability by using
a randomly generated polynomial as a {\it separator}.
In the following program, one of the two factors generated on a server
is sent to another server and the other factor is factorized on the server
itself.
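The splitting step itself can be sketched with toy polynomial arithmetic over a small prime field (pure Python written for illustration; the helpers {\tt pmod}, {\tt ppowmod}, {\tt pgcd} and {\tt split\_step} are ours, whereas {\tt field\_order\_ff}, {\tt monic\_randpoly\_ff}, {\tt generic\_pwrmod\_ff} and {\tt ugcd} in the program below are Asir built-ins):

```python
import random

P = 101  # a small odd prime: the toy field is GF(P)

# A polynomial is a list of coefficients mod P, lowest degree first.

def trim(a):
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def pmod(a, m):
    """remainder of a divided by m (leading coefficient of m non-zero)"""
    a = a[:]
    while len(a) >= len(m) and any(a):
        c = a[-1] * pow(m[-1], -1, P) % P
        d = len(a) - len(m)
        for i, mi in enumerate(m):
            a[d + i] = (a[d + i] - c * mi) % P
        trim(a)
    return a

def pmulmod(a, b, m):
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % P
    return pmod(trim(res), m)

def ppowmod(a, e, m):
    """a^e mod m by binary exponentiation"""
    r, a = [1], pmod(a, m)
    while e:
        if e & 1:
            r = pmulmod(r, a, m)
        a = pmulmod(a, a, m)
        e >>= 1
    return r

def pgcd(a, b):
    while any(b):
        a, b = b, pmod(a, b)
    return a

def split_step(F, E):
    """one splitting attempt: a monic non-trivial factor of F, or None"""
    n = len(F) - 1
    # random monic separator (mirrors monic_randpoly_ff(2*E,V))
    W = [random.randrange(P) for _ in range(2 * E)] + [1]
    T = ppowmod(W, (P ** E - 1) // 2, F)   # W^((P^E-1)/2) mod F
    T[0] = (T[0] - 1) % P                  # T <- T - 1
    T = trim(T)
    if not any(T):
        return None
    G = pgcd(F[:], T)
    if 0 < len(G) - 1 < n:
        inv = pow(G[-1], -1, P)
        return [c * inv % P for c in G]    # make the factor monic
    return None

# F = (x-1)(x-2)(x-3)(x-4) over GF(101): squarefree, all factors of degree 1
F = [24, 51, 35, 91, 1]
random.seed(0)
G = None
for _ in range(200):
    G = split_step(F, 1)
    if G:
        break
```

Each attempt succeeds with probability roughly $1/2$ per pair of factors, so a few iterations almost always yield a non-trivial split; the recursion then continues on each factor, and in the distributed version one factor is shipped to another server.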
\begin{verbatim}
/* factorization of F */
def c_z(F,E,Level)
{
  if ( N == E ) return [F];
  M = field_order_ff(); K = idiv(N,E); L = [F];
  while ( 1 ) {
    /* generate a random polynomial */
    W = monic_randpoly_ff(2*E,V);
    /* compute a power of the random polynomial */
    T = generic_pwrmod_ff(W,F,idiv(M^E-1,2));
    if ( !(W = T-1) ) continue;
    /* G = GCD(F, W^((M^E-1)/2)-1 mod F) */
    G = ugcd(F,W);
    if ( deg(G,V) && deg(G,V) < N ) {
      /* G is a non-trivial factor of F */
      if ( Level >= LevelMax ) {
        /* everything is done on this server */
        L1 = c_z(G,E,Level+1);
