
Diff for /OpenXM/doc/ascm2001p/homogeneous-network.tex between version 1.2 and 1.8

version 1.2, 2001/06/20 01:43:12
version 1.8, 2001/06/21 03:09:46
% $OpenXM: OpenXM/doc/ascm2001p/homogeneous-network.tex,v 1.7 2001/06/20 05:42:47 takayama Exp $
   
\subsection{Distributed computation with homogeneous servers}
\label{section:homog}
   
One of the aims of OpenXM is parallel speedup through distributed
computation with homogeneous servers.  Let us look at some examples.
   
   \subsubsection{Competitive distributed computation by various strategies}
   
SINGULAR \cite{Singular} implements the MP interface for distributed
computation, and a competitive Gr\"obner basis computation is
given as an example of distributed computation over that interface.
Such a distributed computation is also possible with OpenXM.
   
   \begin{verbatim}
   extern Proc1,Proc2$
   Proc1 = -1$ Proc2 = -1$
/* G: set of polys; V: list of variables */
/* Mod: the ground field GF(Mod); O: type of order */
   def dgr(G,V,Mod,O)
   {
     /* invoke servers if necessary */
     if ( Proc1 == -1 ) Proc1 = ox_launch();
     if ( Proc2 == -1 ) Proc2 = ox_launch();
     P = [Proc1,Proc2];
     map(ox_reset,P); /* reset servers */
     /* P0 executes Buchberger algorithm over GF(Mod) */
     ox_cmo_rpc(P[0],"dp_gr_mod_main",G,V,0,Mod,O);
     /* P1 executes F4 algorithm over GF(Mod) */
     ox_cmo_rpc(P[1],"dp_f4_mod_main",G,V,Mod,O);
     map(ox_push_cmd,P,262); /* 262 = OX_popCMO */
     F = ox_select(P); /* wait for data */
     /* F[0] is a server's id which is ready */
     R = ox_get(F[0]);
     if ( F[0] == P[0] ) { Win = "Buchberger"; Lose = P[1]; }
     else { Win = "F4"; Lose = P[0]; }
     ox_reset(Lose); /* reset the loser */
     return [Win,R];
   }
   \end{verbatim}
In the above Asir program, the client creates two servers and requests
Gr\"obner basis computations from them for the same input,
one by the Buchberger algorithm and one by the $F_4$ algorithm.
The client watches the streams with {\tt ox\_select()}
and takes the result that is returned first. The remaining
server is then reset.
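This first-finisher pattern is not specific to Asir. As a rough
illustration, the following Python sketch races two stand-in strategies
({\tt buchberger\_like} and {\tt f4\_like} are hypothetical placeholders,
not OpenXM calls, and the sleeps merely simulate unequal running times);
it keeps whichever answer arrives first, mirroring the roles of
{\tt ox\_select()} and the final reset of the loser.

```python
import concurrent.futures as cf
import time

def buchberger_like(task):
    # hypothetical stand-in for the Buchberger server; simulated as slower
    time.sleep(0.2)
    return ("Buchberger", task * 2)

def f4_like(task):
    # hypothetical stand-in for the F4 server; simulated as faster
    time.sleep(0.05)
    return ("F4", task * 2)

def race(task):
    # Submit the same input to both strategies and take the first
    # completed result, analogous to ox_select()/ox_get().
    with cf.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(buchberger_like, task),
                   pool.submit(f4_like, task)]
        done, not_done = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
        winner, result = done.pop().result()
        # Attempt to discard the loser, analogous to resetting it;
        # a running Python thread cannot be interrupted mid-flight.
        for f in not_done:
            f.cancel()
        return winner, result
```

The same structure applies whenever several strategies with incomparable
worst cases are available for one problem: the wall-clock cost is that of
the best strategy on the given input, at the price of redundant work.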
   
\subsubsection{Nesting of client-server communication}
   
\begin{figure}
\label{tree}
\begin{center}
\caption{Tree-like structure of client-server communication}
\end{center}
\end{figure}
   %%Prog:  load ("dfff"); df_demo();  enter 100.
   Under OpenXM-RFC 100 an OpenXM server can be a client of other servers.
Figure 2 illustrates a tree-like structure of OpenXM
client-server communication.
Such a computational model is useful for parallel implementation of
algorithms whose task can be divided into subtasks recursively.
   
%  }
%}
%\end{verbatim}
   %
A typical example is a parallelization of the Cantor-Zassenhaus
algorithm for polynomial factorization over finite fields,
which is a recursive algorithm.
At each level of the recursion, a given polynomial can be
divided into two non-trivial factors with some probability by using
a randomly generated polynomial as a {\it separator}.
We can apply the following simple parallelization:
when two non-trivial factors are generated on a server,
one is sent to another server and the other factor is factorized on the
server itself.
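The splitting strategy just described can be sketched in Python as
follows. This is only an illustration of the parallelization pattern,
not Cantor-Zassenhaus itself: {\tt try\_split} is a hypothetical
stand-in for the probabilistic separator step, and integers play the
role of polynomials. The point is only the shape of the recursion:
keep one factor locally, hand the other to another worker.

```python
import concurrent.futures as cf

def try_split(n):
    # Hypothetical stand-in for the probabilistic separator step:
    # return two non-trivial factors of n, or None if n is "irreducible"
    # (here: prime or 1).
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return d, n // d
    return None

def parallel_factor(n, pool):
    # Recursively factor n.  When a split succeeds, one factor is
    # handed to another worker while this worker keeps the other,
    # mirroring the server-to-server hand-off described above.
    split = try_split(n)
    if split is None:
        return [n]                                 # base case
    a, b = split
    other = pool.submit(parallel_factor, b, pool)  # send b to another worker
    mine = parallel_factor(a, pool)                # factor a locally
    return sorted(mine + other.result())
```

Note that with a thread pool, each waiting parent occupies a pool
thread, so {\tt max\_workers} must exceed the depth of the recursion;
in the true OpenXM setting each hand-off goes to a separate server
process instead.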
%\begin{verbatim}
%  if ( N == E ) return [F];
%  M = field_order_ff(); K = idiv(N,E); L = [F];
%  while ( 1 ) {
%    /* generate a random polynomial */
%    W = monic_randpoly_ff(2*E,V);
%    /* compute a power of the random polynomial */
%    T = generic_pwrmod_ff(W,F,idiv(M^E-1,2));
%Such a distributed computation is also possible on OpenXM as follows:
%
%The client creates two servers and it requests
%Gr\"obner basis computations from the homogenized input and the input itself
%to the servers.
%The client watches the streams by {\tt ox\_select()}
%and the result which is returned first is taken. Then the remaining
