
Diff for /OpenXM/doc/ascm2001p/homogeneous-network.tex between version 1.2 (2001/06/20 01:43:12) and version 1.6 (2001/06/20 03:18:21)
% $OpenXM: OpenXM/doc/ascm2001p/homogeneous-network.tex,v 1.5 2001/06/20 03:08:05 takayama Exp $
   
\subsection{Distributed computation with homogeneous servers}
\label{section:homog}
   
One of the aims of OpenXM is parallel speedup by distributed computation
with homogeneous servers.
   
   \subsubsection{Competitive distributed computation by various strategies}
   
SINGULAR \cite{Singular} implements the {\it MP} interface for distributed
computation, and a competitive Gr\"obner basis computation is illustrated
as an example of distributed computation.
Such a computation is also possible on OpenXM.
   
   \begin{verbatim}
   extern Proc1,Proc2$ Proc1 = -1$ Proc2 = -1$
   /* G:set of polys; V:list of variables */
/* Mod: the ground field GF(Mod); O: type of order */
   def dgr(G,V,Mod,O)
   {
     /* invoke servers if necessary */
     if ( Proc1 == -1 ) Proc1 = ox_launch();
     if ( Proc2 == -1 ) Proc2 = ox_launch();
     P = [Proc1,Proc2];
     map(ox_reset,P); /* reset servers */
     /* P0 executes Buchberger algorithm over GF(Mod) */
     ox_cmo_rpc(P[0],"dp_gr_mod_main",G,V,0,Mod,O);
     /* P1 executes F4 algorithm over GF(Mod) */
     ox_cmo_rpc(P[1],"dp_f4_mod_main",G,V,Mod,O);
     map(ox_push_cmd,P,262); /* 262 = OX_popCMO */
     F = ox_select(P); /* wait for data */
  /* F[0] is the id of a server which is ready */
     R = ox_get(F[0]);
     if ( F[0] == P[0] ) { Win = "Buchberger"; Lose = P[1]; }
     else { Win = "F4"; Lose = P[0]; }
     ox_reset(Lose); /* reset the loser */
     return [Win,R];
   }
   \end{verbatim}
In the above Asir program, the client launches two servers and requests
Gr\"obner basis computations of the same input, by the Buchberger
algorithm on one server and by the $F_4$ algorithm on the other.
The client watches the streams by {\tt ox\_select()}
and takes the result which is returned first. Then the remaining
server is reset.
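
For illustration, {\tt dgr()} can be called as follows. This is a
hypothetical session: the {\tt katsura} library file, the modulus
$31991$ and the order type $0$ (degree reverse lexicographic order)
are common Asir conventions and are not taken from this paper.
\begin{verbatim}
/* hypothetical session: race Buchberger against F4 on
   the Katsura-5 system over GF(31991) */
load("katsura")$          /* assumed to define katsura() */
B = katsura(5)$           /* polynomials in u0,...,u5 */
V = [u0,u1,u2,u3,u4,u5]$
L = dgr(B,V,31991,0)$     /* 0 = degree reverse lex order */
L[0];                     /* name of the winning strategy */
\end{verbatim}
Which strategy wins depends on the input; the point is that the client
waits only for the faster of the two servers.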
   
\subsubsection{Nesting of client-server communication}
   
Under OpenXM-RFC 100 an OpenXM server can be a client of other servers.
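For instance, the following sketch lets a server drive its own
sub-server. It assumes {\tt ox\_execute\_string()}, which sends a
command string to a server for evaluation; the factorization request
is just a placeholder.
\begin{verbatim}
/* sketch: S is our server; S itself acts as a client of T */
S = ox_launch();
ox_execute_string(S,"T = ox_launch()$");
ox_execute_string(S,"ox_cmo_rpc(T,\"fctr\",x^105-1)$");
ox_execute_string(S,"ox_push_cmd(T,262)$"); /* 262 = OX_popCMO */
ox_execute_string(S,"R = ox_get(T)$");
ox_execute_string(S,"R;");  /* leave R on the stack of S */
ox_push_cmd(S,262);
Result = ox_get(S);         /* computed two levels down */
\end{verbatim}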

[...] algorithms whose task can be divided into subtasks recursively.
A typical example is a parallelization of the Cantor-Zassenhaus
algorithm for polynomial factorization over finite fields,
which is a recursive algorithm.
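The actual program is omitted in this excerpt; the following is only a
schematic sketch of the delegation pattern, in which {\tt split()},
{\tt merge()}, {\tt is\_leaf()} and {\tt solve\_leaf()} are
hypothetical helpers and each level may hand one branch of the
recursion to a freshly launched server.
\begin{verbatim}
/* sketch: recursive task splitting with nested servers.
   split(), merge(), is_leaf(), solve_leaf() are hypothetical. */
def psolve(Task,Depth,MaxDepth)
{
  if ( is_leaf(Task) )
    return solve_leaf(Task);
  L = split(Task);          /* two independent subtasks */
  if ( Depth < MaxDepth ) {
    /* delegate one branch; the server may itself
       launch sub-servers (nested client-server) */
    S = ox_launch();
    ox_cmo_rpc(S,"psolve",L[0],Depth+1,MaxDepth);
    ox_push_cmd(S,262);     /* 262 = OX_popCMO */
    R1 = psolve(L[1],Depth+1,MaxDepth); /* other branch locally */
    R0 = ox_get(S);         /* collect the delegated result */
    ox_shutdown(S);
  } else {
    R0 = psolve(L[0],Depth+1,MaxDepth);
    R1 = psolve(L[1],Depth+1,MaxDepth);
  }
  return merge(R0,R1);
}
\end{verbatim}
A real implementation would pre-launch a fixed pool of servers rather
than calling {\tt ox\_launch()} inside the recursion, and the same
program must be loaded on each server so that {\tt psolve} is defined
there as well.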
