% $OpenXM: OpenXM/doc/Papers/dagb-noro.tex,v 1.1 2001/10/03 08:32:58 noro Exp $
\setlength{\parskip}{10pt}
   
\begin{slide}{}
   \begin{center}
   \fbox{\large Part I : Overview and history of Risa/Asir}
   \end{center}
   \end{slide}
   
   \begin{slide}{}
\fbox{A computer algebra system Risa/Asir}
   
\begin{itemize}
\item Whole source tree is available via CVS
\end{itemize}
   
\item OpenXM (Open message eXchange protocol for Mathematics) interface
   
\begin{itemize}
\item As a client : can call procedures on other OpenXM servers
\item Groebner basis computation
   
\begin{itemize}
\item Buchberger and $F_4$ [Faug\`ere] algorithms
   
\item Change of ordering/RUR [Rouillier] of 0-dimensional ideals
   
\item Primary ideal decomposition
   
\item Computation of $b$-function
\end{itemize}
   
\item PARI [PARI] library interface
   
\item Parallel distributed computation under OpenXM
\end{itemize}
\item 1989--1992
   
\begin{itemize}
\item Reconfigured as Risa/Asir with a parser and Boehm's conservative GC [Boehm]
   
\item Developed univariate and multivariate factorizers over the rationals.
\end{itemize}
\item 1992--1994
   
\begin{itemize}
\item Started implementation of the Buchberger algorithm

Written in the user language $\Rightarrow$ rewritten in C (by Murao)

$\Rightarrow$ trace lifting [Traverso]
   
\item Univariate factorization over algebraic number fields
   
Intensive use of successive extensions and non-squarefree norms
\fbox{History of development : 1994--1996}
   
\begin{itemize}
\item Free distribution of binary versions from Fujitsu
   
\item Primary ideal decomposition
   
\begin{itemize}
\item Shimoyama-Yokoyama algorithm [SY]
\end{itemize}
   
\item Improvement of Buchberger algorithm
   
\item Modular change of ordering, modular RUR
   
These are joint work with Yokoyama [NY]
\end{itemize}
\end{itemize}
   
   
\item Its parallelization by the above facility
   
\item Computation of odd order replicable functions [Noro]

Risa/Asir : it took 5 days to compute a DRL basis ({\it McKay})

Faug\`ere's FGb : computation of the DRL basis took 53 sec
\end{itemize}
   
   
\begin{itemize}
\item To implement the Schoof-Elkies-Atkin algorithm
   
Counting rational points on elliptic curves --- not free

But related functions are freely available
\end{itemize}
\end{itemize}
   
\begin{itemize}
\item OpenXM specification was written by Noro and Takayama
   
Borrowed ideas on encodings and phrasebooks from OpenMath [OpenMath]
   
\item Functions for distributed computation were rewritten
\end{itemize}
   
\item Test implementation of $F_4$
   
\begin{itemize}
   \item Implemented according to [Faug\`ere]
   
\item Over $GF(p)$ : pretty good
   
\item Over the rationals : not so good except for {\it McKay}
\item The source code is freely available
   
\begin{itemize}
\item Noro moved from Fujitsu to Kobe University
   
Started the Kobe branch [Risa/Asir]
\end{itemize}
   
\item OpenXM [OpenXM]
   
\begin{itemize}
\item Revising the specification : OX-RFC100, 101, (102)
   
\item OX-RFC102 : communications between servers via MPI
\end{itemize}
   
\item Rings of differential operators
   
\begin{itemize}
\item Buchberger algorithm [Takayama]
   
\item $b$-function computation [OT]
   
Minimal polynomial computation by modular method
\end{itemize}
\item 10 years ago
   
its performance was fine compared with existing software
like REDUCE and Mathematica.
   
\item 4 years ago
   
Multivariate : not so bad
   
Univariate : made completely obsolete by M. van Hoeij's new algorithm [Hoeij]
\end{itemize}
   
\end{slide}
\end{slide}
   
\begin{slide}{}
   \fbox{OpenXM}
   
   \begin{itemize}
   \item An environment for parallel distributed computation
   
Both for interactive and non-interactive environments
   
   \item Message passing
   
   OX (OpenXM) message : command and data
   
   \item Hybrid command execution
   
   \begin{itemize}
   \item Stack machine command
   
   push, pop, function execution, $\ldots$
   
\item Accepts its own command sequences
   
   {\tt execute\_string} --- easy to use
   \end{itemize}
   
   \item Data is represented as CMO
   
   CMO (Common Mathematical Object format)
   
   --- Serialized representation of mathematical object
   
   {\sl Integer32}, {\sl Cstring}, {\sl List}, {\sl ZZ}, $\ldots$
   \end{itemize}
   \end{slide}
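
\begin{slide}{}
\fbox{A minimal OpenXM session from Asir (sketch)}

A small sketch of the items on the previous slide, assuming an
{\tt ox\_asir} server on the local host and using only the
{\tt ox\_*} client functions of Risa/Asir.

\begin{verbatim}
/* the launcher starts an ox_asir server on the local host */
P = ox_launch();
/* data is serialized as CMO and pushed to the server's stack */
ox_push_cmo(P, [1,2,3]);
ox_push_cmo(P, x^2+y^2-1);
/* SM_popCMO : the top of the stack is sent back as CMO */
F = ox_pop_cmo(P);   /* x^2+y^2-1 */
L = ox_pop_cmo(P);   /* [1,2,3] */
ox_shutdown(P);      /* terminate the server */
\end{verbatim}
\end{slide}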
   
   
   \begin{slide}{}
   \fbox{OpenXM and OpenMath}
   
   \begin{itemize}
   \item OpenMath
   
   \begin{itemize}
   \item A standard for representing mathematical objects
   
   \item CD (Content Dictionary) : assigns semantics to symbols
   
\item Phrasebook : conversion between internal and OpenMath objects.
   
   \item Encoding : format for actual data exchange
   \end{itemize}
   
   \item OpenXM
   
   \begin{itemize}
   \item Specification for encoding and exchanging messages
   
   \item It also specifies behavior of servers and session management
   \end{itemize}
   
   \end{itemize}
   \end{slide}
   
   \begin{slide}{}
   \fbox{OpenXM server interface in Risa/Asir}
   
   \begin{itemize}
   \item TCP/IP stream
   
   \begin{itemize}
   \item Launcher
   
   A client executes a launcher on a host.
   
   The launcher launches a server on the same host.
   
   \item Server
   
A server reads from file descriptor 3 and writes to file descriptor 4.
   
   \end{itemize}
   
   \item Subroutine call
   
   Risa/Asir subroutine library provides interfaces corresponding to
   pushing and popping data and executing stack commands.
   \end{itemize}
   \end{slide}
   
   \begin{slide}{}
   \fbox{OpenXM client interface in Risa/Asir}
   
   \begin{itemize}
   \item Primitive interface functions
   
   Pushing and popping data, sending commands etc.
   
   \item Convenient functions
   
   Launching servers, calling remote functions,
    interrupting remote executions etc.
   
   \item Parallel distributed computation is easy
   
   Simple parallelization is practically important
   
   Competitive computation is easily realized
   \end{itemize}
   \end{slide}
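
\begin{slide}{}
\fbox{Simple parallelization (sketch)}

A sketch of the simple parallelization mentioned on the previous
slide, assuming two {\tt ox\_asir} servers; the input polynomials
are arbitrary examples.

\begin{verbatim}
/* launch two ox_asir servers on the local host */
P0 = ox_launch(); P1 = ox_launch();
/* dispatch two independent factorizations; ox_cmo_rpc returns */
/* at once and the results stay on the servers' stacks */
ox_cmo_rpc(P0, "fctr", x^15-1);
ox_cmo_rpc(P1, "fctr", y^16-1);
/* collect the results; each pop waits until its server is done */
R0 = ox_pop_cmo(P0);
R1 = ox_pop_cmo(P1);
map(ox_shutdown, [P0,P1]);
\end{verbatim}
\end{slide}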
   
   
   %\begin{slide}{}
   %\fbox{CMO = Serialized representation of mathematical object}
   %
   %\begin{itemize}
   %\item primitive data
   %\begin{eqnarray*}
   %\mbox{Integer32} &:& ({\tt CMO\_INT32}, {\sl int32}\ \mbox{n}) \\
   %\mbox{Cstring}&:& ({\tt CMO\_STRING},{\sl int32}\,  \mbox{ n}, {\sl string}\, \mbox{s}) \\
   %\mbox{List} &:& ({\tt CMO\_LIST}, {\sl int32}\, len, ob[0], \ldots,ob[m-1])
   %\end{eqnarray*}
   %
   %\item numbers and polynomials
   %\begin{eqnarray*}
   %\mbox{ZZ}         &:& ({\tt CMO\_ZZ},{\sl int32}\, {\rm f}, {\sl byte}\, \mbox{a[1]}, \ldots
   %{\sl byte}\, \mbox{a[$|$f$|$]} ) \\
   %\mbox{Monomial32}&:& ({\tt CMO\_MONOMIAL32}, n, \mbox{e[1]}, \ldots, \mbox{e[n]}, \mbox{Coef}) \\
   %\mbox{Coef}&:& \mbox{ZZ} | \mbox{Integer32} \\
   %\mbox{Dpolynomial}&:& ({\tt CMO\_DISTRIBUTED\_POLYNOMIAL},\\
   %                  & & m, \mbox{DringDefinition}, \mbox{Monomial32}, \ldots)\\
   %\mbox{DringDefinition}
   %                  &:& \mbox{DMS of N variables} \\
   %                  & & ({\tt CMO\_RING\_BY\_NAME}, name) \\
   %                  & & ({\tt CMO\_DMS\_GENERIC}) \\
   %\end{eqnarray*}
   %\end{itemize}
   %\end{slide}
   %
   %\begin{slide}{}
   %\fbox{Stack based communication}
   %
   %\begin{itemize}
   %\item Data arrived a client
   %
   %Pushed to the stack
   %
   %\item Result
   %
   %Pushd to the stack
   %
   %Written to the stream when requested by a command
   %
   %\item The reason why we use the stack
   %
   %\begin{itemize}
   %\item Stack = I/O buffer for (possibly large) objects
   %
   %Multiple requests can be sent before their exection
   %
   %A server does not get stuck in sending results
   %\end{itemize}
   %\end{itemize}
   %\end{slide}
   
   \begin{slide}{}
   \fbox{Executing functions on a server (I) --- {\tt SM\_executeFunction}}
   
   \begin{enumerate}
   \item (C $\rightarrow$ S) Arguments are sent in binary encoded form.
\item (C $\rightarrow$ S) The number of arguments is sent as {\sl Integer32}.
   \item (C $\rightarrow$ S) A function name is sent as {\sl Cstring}.
   \item (C $\rightarrow$ S) A command {\tt SM\_executeFunction} is sent.
   \item The result is pushed to the stack.
   \item (C $\rightarrow$ S) A command {\tt SM\_popCMO} is sent.
   \item (S $\rightarrow$ C) The result is sent in binary encoded form.
   \end{enumerate}
   
   $\Rightarrow$ Communication is fast, but functions for binary data
   conversion are necessary.
   \end{slide}
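
\begin{slide}{}
\fbox{{\tt SM\_executeFunction} from the Asir client (sketch)}

A sketch only : in the Risa/Asir client the steps above are wrapped by
{\tt ox\_cmo\_rpc} (steps 1--4) and {\tt ox\_pop\_cmo} (steps 6--7).

\begin{verbatim}
P = ox_launch();
/* steps 1--4 : arguments in CMO form, the number of arguments, */
/* the function name and SM_executeFunction are sent            */
ox_cmo_rpc(P, "fctr", x^10-y^10);
/* step 5 on the server : the result is pushed to the stack     */
/* steps 6--7 : SM_popCMO is sent and the encoded result read   */
F = ox_pop_cmo(P);
\end{verbatim}
\end{slide}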
   
   \begin{slide}{}
   \fbox{Executing functions on a server (II) --- {\tt SM\_executeString}}
   
   \begin{enumerate}
\item (C $\rightarrow$ S) A character string representing a request in a server's
   user language is sent as {\sl Cstring}.
   \item (C $\rightarrow$ S) A command {\tt SM\_executeString} is sent.
   \item The result is pushed to the stack.
   \item (C $\rightarrow$ S) A command {\tt SM\_popString} is sent.
   \item (S $\rightarrow$ C) The result is sent in readable form.
   \end{enumerate}
   
$\Rightarrow$ Communication may be slow, but the client's own parser may be
enough to read the result.
   \end{slide}
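
\begin{slide}{}
\fbox{{\tt SM\_executeString} from the Asir client (sketch)}

A sketch only : the request is a string in the server's user language,
sent by {\tt ox\_execute\_string}; here the result is popped as a CMO
with {\tt ox\_pop\_cmo} instead of {\tt SM\_popString}.

\begin{verbatim}
P = ox_launch();
/* the string is parsed and evaluated by the ox_asir server */
ox_execute_string(P, "fctr(x^10-y^10);");
/* the result was pushed to the server's stack; retrieve it */
F = ox_pop_cmo(P);
\end{verbatim}
\end{slide}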
   
   \begin{slide}{}
\fbox{Example of distributed computation --- $F_4$ vs.\ Buchberger}
   
   \begin{verbatim}
   /* competitive Gbase computation over GF(M) */
   /* Cf. A.28 in SINGULAR Manual */
   /* Process list is specified as an option : grvsf4(...|proc=P) */
   def grvsf4(G,V,M,O)
   {
     P = getopt(proc);
     if ( type(P) == -1 ) return dp_f4_mod_main(G,V,M,O);
     P0 = P[0]; P1 = P[1]; P = [P0,P1];
     map(ox_reset,P);
     ox_cmo_rpc(P0,"dp_f4_mod_main",G,V,M,O);
     ox_cmo_rpc(P1,"dp_gr_mod_main",G,V,0,M,O);
     map(ox_push_cmd,P,262); /* 262 = OX_popCMO */
     F = ox_select(P); R = ox_get(F[0]);
     if ( F[0] == P0 ) { Win = "F4"; Lose = P1;}
     else { Win = "Buchberger"; Lose = P0; }
     ox_reset(Lose); /* simply resets the loser */
     return [Win,R];
   }
   \end{verbatim}
   \end{slide}
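
\begin{slide}{}
\fbox{Calling {\tt grvsf4} (sketch)}

A possible call of the function on the previous slide; the input
system and the modulus are arbitrary examples.

\begin{verbatim}
P0 = ox_launch(); P1 = ox_launch();
/* order 0 = DRL over GF(31991); the two servers race */
grvsf4([x^2+y^2-1, x*y-1], [x,y], 31991, 0|proc=[P0,P1]);
\end{verbatim}
\end{slide}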
   
   \begin{slide}{}
   \fbox{References}
   
[Bernardin] L. Bernardin, On square-free factorization of
multivariate polynomials over a finite field, Theoretical
Computer Science 187 (1997), 105--116.

[Boehm] {\tt http://www.hpl.hp.com/personal/Hans\_Boehm/gc}

[Faug\`ere] J.C. Faug\`ere,
A new efficient algorithm for computing Groebner bases ($F_4$),
Journal of Pure and Applied Algebra 139 (1-3) (1999), 61--88.

[Hoeij] M. van Hoeij, Factoring polynomials and the knapsack problem,
to appear in Journal of Number Theory (2000).

[SY] T. Shimoyama, K. Yokoyama, Localization and Primary Decomposition
of Polynomial Ideals. J. Symb. Comp. {\bf 22} (1996), 247--277.

[NY] M. Noro, K. Yokoyama,
A Modular Method to Compute the Rational Univariate
Representation of Zero-Dimensional Ideals.
J. Symb. Comp. {\bf 28}/1 (1999), 243--263.

[OpenMath] {\tt http://www.openmath.org}

[OpenXM] {\tt http://www.openxm.org}

[PARI] {\tt http://www.parigp-home.de}

[Risa/Asir] {\tt http://www.math.kobe-u.ac.jp/Asir/asir.html}

[Rouillier] F. Rouillier,
R\'esolution des syst\`emes z\'ero-dimensionnels.
Doctoral Thesis (1996), University of Rennes I, France.

[Traverso] C. Traverso, \gr trace algorithms. Proc. ISSAC '88 (LNCS 358), 125--138.
   
   \end{slide}
   
   \begin{slide}{}
   \begin{center}
   \fbox{\large Part II : Algorithms and implementations in Risa/Asir}
   \end{center}
   \end{slide}
   
   \begin{slide}{}
\fbox{Ground fields}
   
\begin{itemize}
   
Classical EZ algorithm
   
\item Over small finite fields

Modified Bernardin's square-free algorithm [Bernardin],

possibly Hensel lifting over extension fields
\end{itemize}
   
\end{itemize}
\begin{itemize}
\item Groebner basis of a left ideal
   
Key : an efficient implementation of the Leibniz rule
\end{itemize}
   
\end{itemize}
\begin{itemize}
\item Over small finite fields ($GF(p)$, $p < 2^{30}$)
\begin{itemize}
\item More efficient than our Buchberger algorithm implementation
   
but less efficient than FGb by Faug\`ere
\end{itemize}
\begin{itemize}
\item Very naive implementation
   
   Modular computation + CRT + Checking the result at each degree
   
\item Less efficient than the Buchberger algorithm

except for one example (={\it McKay})
\end{itemize}
   
\end{itemize}
\end{slide}
   
\begin{slide}{}
\fbox{Change of ordering for zero-dimensional ideals}
   
\begin{itemize}
\item Any ordering to lex ordering
   
The knapsack factorization is available via {\tt pari(factor,{\it poly})}
\end{itemize}
   
   
\end{itemize}
\end{slide}
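
\begin{slide}{}
\fbox{Calling PARI from Asir (sketch)}

A sketch of the {\tt pari(factor,{\it poly})} call mentioned above;
the polynomial is an arbitrary example.

\begin{verbatim}
/* factorization over the rationals via the PARI library */
F = pari(factor, x^8-1);
\end{verbatim}
\end{slide}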
   
\begin{slide}{}
