% $OpenXM: OpenXM/doc/ascm2001p/homogeneous-network.tex,v 1.1 2001/06/19 07:32:58 noro Exp $

\subsection{Distributed computation with homogeneous servers}
\label{section:homog}

One of the aims of OpenXM is to achieve parallel speedup through distributed
computation with homogeneous servers. As the current specification of OpenXM
does not include communication between servers, one cannot expect the maximum
possible parallel speedup. However, it is possible to execute several types of
distributed computation, as follows.

\subsubsection{Nesting of client-server communication}

Under OpenXM-RFC 100, an OpenXM server can itself act as a client of other
servers. Figure \ref{tree} illustrates a tree-like structure of OpenXM
client-server communication.
\begin{figure}
\begin{center}
\begin{picture}(200,70)(0,0)
\put(70,70){\framebox(40,15){client}}
\put(20,30){\framebox(40,15){server}}
\put(70,30){\framebox(40,15){server}}
\put(120,30){\framebox(40,15){server}}
\put(0,0){\framebox(40,15){server}}
\put(50,0){\framebox(40,15){server}}
\put(150,0){\framebox(40,15){server}}

\put(90,70){\vector(-2,-1){43}}
\put(90,70){\vector(0,-1){21}}
\put(90,70){\vector(2,-1){43}}
\put(40,30){\vector(-2,-1){22}}
\put(40,30){\vector(2,-1){22}}
\put(140,30){\vector(2,-1){22}}
\end{picture}
\caption{Tree-like structure of client-server communication}
\label{tree}
\end{center}
\end{figure}
Such a computational model is useful for the parallel implementation of
algorithms whose tasks can be divided into subtasks recursively.

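As a minimal sketch of this pattern (not taken from the actual implementation;
{\tt subtask} stands for an arbitrary user-defined procedure registered on the
servers), a routine running on one server can delegate part of its work to a
newly launched child server and collect the result:
\begin{verbatim}
/* delegate subtask(X) to a child server and wait for the result */
def delegate(X) {
  P = ox_launch(0);        /* launch a child server */
  ox_rpc(P,"subtask",X);   /* request remote execution of subtask(X) */
  R = ox_pop_local(P);     /* receive the result computed remotely */
  return R;
}
\end{verbatim}
Because the child is itself a full OpenXM server, {\tt subtask()} may in turn
launch and call further servers in the same way, which yields exactly the tree
of Figure \ref{tree}.
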
%A typical example is {\it quicksort}, where an array to be sorted is
%partitioned into two sub-arrays and the algorithm is applied to each
%sub-array. In each level of recursion, two subtasks are generated
%and one can ask other OpenXM servers to execute them.
%Though it makes little contribution to the efficiency in the case of
%quicksort, we present an Asir program of this distributed quicksort
%to demonstrate that OpenXM gives an easy way to test this algorithm.
%In the program, a predefined constant {\tt LevelMax} determines
%whether new servers are launched or whole subtasks are done on the server.
%
%\begin{verbatim}
%#define LevelMax 2
%extern Proc1, Proc2;
%Proc1 = -1$ Proc2 = -1$
%
%/* sort [A[P],...,A[Q]] by quicksort */
%def quickSort(A,P,Q,Level) {
%  if (Q-P < 1) return A;
%  Mp = idiv(P+Q,2); M = A[Mp]; B = P; E = Q;
%  while (1) {
%    while (A[B] < M) B++;
%    while (A[E] > M && B <= E) E--;
%    if (B >= E) break;
%    else { T = A[B]; A[B] = A[E]; A[E] = T; E--; }
%  }
%  if (E < P) E = P;
%  if (Level < LevelMax) {
%    /* launch new servers if necessary */
%    if (Proc1 == -1) Proc1 = ox_launch(0);
%    if (Proc2 == -1) Proc2 = ox_launch(0);
%    /* send the requests to the servers */
%    ox_rpc(Proc1,"quickSort",A,P,E,Level+1);
%    ox_rpc(Proc2,"quickSort",A,E+1,Q,Level+1);
%    if (E-P < Q-E) {
%      A1 = ox_pop_local(Proc1);
%      A2 = ox_pop_local(Proc2);
%    } else {
%      A2 = ox_pop_local(Proc2);
%      A1 = ox_pop_local(Proc1);
%    }
%    for (I=P; I<=E; I++) A[I] = A1[I];
%    for (I=E+1; I<=Q; I++) A[I] = A2[I];
%    return(A);
%  } else {
%    /* everything is done on this server */
%    quickSort(A,P,E,Level+1);
%    quickSort(A,E+1,Q,Level+1);
%    return(A);
%  }
%}
%\end{verbatim}

A typical example is a parallelization of the Cantor-Zassenhaus algorithm for
polynomial factorization over finite fields, which is a recursive algorithm.
At each level of the recursion, a given polynomial can be divided into two
non-trivial factors with some probability by using a randomly generated
polynomial as a {\it separator}.
We can apply the following simple parallelization: when two non-trivial
factors are generated on a server, one of them is sent to another server and
the remaining factor is factorized on the original server.
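
The delegation step can be sketched as follows. This is an illustrative
fragment, not the complete factorizer: {\tt G} and its cofactor {\tt Q} denote
the two non-trivial factors found on the current server, {\tt c\_z} is the
local recursive factorizer, {\tt ox\_c\_z} is assumed to be a server-side
wrapper of {\tt c\_z}, and {\tt lmptop()}, {\tt setmod\_ff()} and
{\tt simp\_ff()} carry the finite field data across the connection.
\begin{verbatim}
extern Proc1$
Proc1 = -1$

/* factorize the cofactors G and Q in parallel */
def cz_split(G,Q,E,Level) {
  /* launch the second server once and reuse it afterwards */
  if ( Proc1 < 0 ) Proc1 = ox_launch();
  /* ask the server to factorize G ... */
  ox_cmo_rpc(Proc1,"ox_c_z",lmptop(G),E,setmod_ff(),Level+1);
  /* ... while Q is factorized on this server */
  L2 = c_z(Q,E,Level+1);
  /* collect the factors of G from the server and merge */
  L1 = map(simp_ff,ox_pop_cmo(Proc1));
  return append(L1,L2);
}
\end{verbatim}
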
%\begin{verbatim}
%/* factorization of F */
%/* E = degree of irreducible factors in F */
%def c_z(F,E,Level)
%{
%  V = var(F); N = deg(F,V);
%  if ( N == E ) return [F];
%  M = field_order_ff(); K = idiv(N,E); L = [F];
%  while ( 1 ) {
%    /* generate a random polynomial */
%    W = monic_randpoly_ff(2*E,V);
%    /* compute a power of the random polynomial */
%    T = generic_pwrmod_ff(W,F,idiv(M^E-1,2));
%    if ( !(W = T-1) ) continue;
%    /* G = GCD(F,W^((M^E-1)/2)-1 mod F) */
%    G = ugcd(F,W);
%    if ( deg(G,V) && deg(G,V) < N ) {
%      /* G is a non-trivial factor of F */
%      if ( Level >= LevelMax ) {
%        /* everything is done on this server */
%        L1 = c_z(G,E,Level+1);
%        L2 = c_z(sdiv(F,G),E,Level+1);
%      } else {
%        /* launch a server if necessary */
%        if ( Proc1 < 0 ) Proc1 = ox_launch();
%        /* send a request with Level = Level+1 */
%        /* ox_c_z is a wrapper of c_z on the server */
%        ox_cmo_rpc(Proc1,"ox_c_z",lmptop(G),E,
%                   setmod_ff(),Level+1);
%        /* the rest is done on this server */
%        L2 = c_z(sdiv(F,G),E,Level+1);
%        L1 = map(simp_ff,ox_pop_cmo(Proc1));
%      }
%      return append(L1,L2);
%    }
%  }
%}
%\end{verbatim}
\subsubsection{Product of univariate polynomials}

Shoup \cite{Shoup} showed that the product of univariate polynomials with
large degrees and large coefficients can be computed efficiently by FFTs over
small finite fields combined with the Chinese remainder theorem.
This method can be parallelized easily:

\begin{tabbing}
Input: \= $f_1, f_2 \in {\bf Z}[x]$ such that $\deg(f_1), \deg(f_2) < 2^M$\\
Output: $f = f_1f_2$ \\
$P \leftarrow$ \= $\{m_1,\cdots,m_N\}$ where each $m_i$ is an odd prime, \\
\> $2^{M+1}\,|\,m_i-1$ and $m=\prod m_i$ is sufficiently large. \\
Separate $P$ into disjoint subsets $P_1, \cdots, P_L$.\\
for \= $j=1$ to $L$: $M_j \leftarrow \prod_{m_i\in P_j} m_i$;\\
\> compute $F_j$ such that $F_j \equiv f_1f_2 \bmod M_j$\\
\> and $F_j \equiv 0 \bmod m/M_j$ in parallel\\
\> (the product is computed by FFT).\\
return $\phi_m(\sum F_j)$\\
(For $a \in {\bf Z}$, $\phi_m(a) \in (-m/2,m/2)$ and $\phi_m(a)\equiv a \bmod m$.)
\end{tabbing}
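
Since $F_j \equiv f_1f_2 \bmod M_j$ and $F_j \equiv 0 \bmod m/M_j$, the sum
$\sum F_j$ is congruent to $f_1f_2$ modulo every $m_i$ and hence modulo $m$;
as $m$ is chosen sufficiently large, $\phi_m(\sum F_j)$ recovers $f_1f_2$
exactly.

A client-side driver for this scheme can be sketched as follows. This is an
illustrative fragment: {\tt part\_mul\_mod()} is a hypothetical server-side
function computing $F_j$ from $f_1$, $f_2$ and the prime subset $P_j$,
{\tt Parts} is the list $[P_1,\cdots,P_L]$, {\tt Procs} is a list of server
id's of the same length, and the final reduction by $\phi_m$ is omitted.
\begin{verbatim}
def parallel_mul(F1,F2,Parts,Procs) {
  L = length(Parts);
  /* send one request per server; the servers work in parallel */
  for ( J = 0; J < L; J++ )
    ox_cmo_rpc(Procs[J],"part_mul_mod",F1,F2,Parts[J]);
  /* gather the partial results F_j and sum them up */
  R = 0;
  for ( J = 0; J < L; J++ )
    R = R + ox_pop_cmo(Procs[J]);
  return R;  /* still to be reduced by phi_m into (-m/2,m/2) */
}
\end{verbatim}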

Figure \ref{speedup} shows the speedup factor achieved by the above
distributed computation on Risa/Asir. For each $n$, two polynomials of degree
$n$ with 3000-bit coefficients are generated and their product is computed.
The machine is a FUJITSU AP3000, a cluster of Sun workstations connected by a
high-speed network, and OpenXM is implemented on top of MPI over this network.
\begin{figure}[htbp]
\epsfxsize=8.5cm
\epsffile{speedup.ps}
\caption{Speedup factor}
\label{speedup}
\end{figure}

If the number of servers is $L$ and the inputs are fixed, then the cost to
compute the $F_j$ in parallel is $O(1/L)$, whereas the cost to send and
receive the polynomials is $O(L)$ if {\tt ox\_push\_cmo()} and
{\tt ox\_pop\_cmo()} are applied repeatedly on the client.
Therefore the speedup is limited, and the upper bound of the speedup factor
depends on the ratio of the computational cost to the communication cost per
unit operation; a simple cost model is sketched below.
Figure \ref{speedup} shows that the speedup is satisfactory if the degree is
large and $L$ is not too large, say up to 10, in the above environment.
If OpenXM provides collective operations for broadcast and reduction, such as
{\tt MPI\_Bcast} and {\tt MPI\_Reduce} respectively, the cost of sending
$f_1$, $f_2$ and of gathering the $F_j$ may be reduced to $O(\log_2 L)$, and
better results can be expected in that case. Implementing such operations
requires new specifications for inter-server communication and session
management, which will be proposed as OpenXM-RFC 102.
We note that preliminary experiments show that the collective operations work
well on OpenXM.
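
To make this limitation explicit, consider a simplified cost model (an
illustrative assumption, not a measured result): let $T$ be the sequential
cost of computing all the $F_j$ and let $c$ be the communication cost per
server for sending $f_1$, $f_2$ and receiving $F_j$. The total parallel cost
is then roughly $T/L + cL$, so the speedup factor is
\[
S(L) = \frac{T}{T/L + cL}, \qquad
\max_{L} S(L) = \frac{1}{2}\sqrt{T/c} \quad \mbox{at } L = \sqrt{T/c},
\]
and the attainable speedup grows only with the ratio $T/c$ of computation to
communication cost, which is consistent with the saturation observed in
Figure \ref{speedup}.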

%\subsubsection{Competitive distributed computation by various strategies}
%
%SINGULAR \cite{Singular} implements the {\it MP} interface for distributed
%computation, and a competitive Gr\"obner basis computation is
%illustrated as an example of distributed computation.
%Such a distributed computation is also possible on OpenXM as follows:
%
%The client creates two servers and requests
%Gr\"obner basis computations of the homogenized input and of the input itself
%from the servers.
%The client watches the streams by {\tt ox\_select()}
%and the result which is returned first is taken. Then the remaining
%server is reset.
%
%\begin{verbatim}
%/* G:set of polys; V:list of variables */
%/* O:type of order; P0,P1: id's of servers */
%def dgr(G,V,O,P0,P1)
%{
%  P = [P0,P1]; /* server list */
%  map(ox_reset,P); /* reset servers */
%  /* P0 executes non-homogenized computation */
%  ox_cmo_rpc(P0,"dp_gr_main",G,V,0,1,O);
%  /* P1 executes homogenized computation */
%  ox_cmo_rpc(P1,"dp_gr_main",G,V,1,1,O);
%  map(ox_push_cmd,P,262); /* 262 = OX_popCMO */
%  F = ox_select(P); /* wait for data */
%  /* F[0] is a server's id which is ready */
%  R = ox_get(F[0]);
%  if ( F[0] == P0 ) {
%    Win = "nonhomo"; Lose = P1;
%  } else {
%    Win = "homo"; Lose = P0;
%  }
%  ox_reset(Lose); /* reset the loser */
%  return [Win,R];
%}
%\end{verbatim}