1.\"Copyright 2007-2008 Sun Microsystems, Inc.
2.\" Copyright (c) 1996 Thinking Machines Corporation
3.TH MPI_Allreduce 3 "Dec 08, 2009" "1.4" "Open MPI"
4.SH NAME
5\fBMPI_Allreduce\fP \- Combines values from all processes and distributes the result back to all processes.
6
7.SH SYNTAX
8.ft R
9.SH C Syntax
10.nf
11#include <mpi.h>
12int MPI_Allreduce(void \fI*sendbuf\fP, void \fI*recvbuf\fP, int\fI count\fP,
13        MPI_Datatype\fI datatype\fP, MPI_Op\fI op\fP, MPI_Comm\fI comm\fP)
14
15.SH Fortran Syntax
16.nf
17INCLUDE 'mpif.h'
18MPI_ALLREDUCE(\fISENDBUF\fP,\fI RECVBUF\fP, \fICOUNT\fP,\fI DATATYPE\fP,\fI OP\fP,
19                \fICOMM\fP, \fIIERROR\fP)
20        <type>  \fISENDBUF\fP(*), \fIRECVBUF\fP(*)
21        INTEGER \fICOUNT\fP,\fI DATATYPE\fP,\fI OP\fP,\fI COMM\fP,\fI IERROR
22
23.SH C++ Syntax
24.nf
25#include <mpi.h>
26void MPI::Comm::Allreduce(const void* \fIsendbuf\fP, void* \fIrecvbuf\fP,
27        int \fIcount\fP, const MPI::Datatype& \fIdatatype\fP, const
28        MPI::Op& \fIop\fP) const=0
29
30.SH INPUT PARAMETERS
31.ft R
32.TP 1i
33sendbuf
34Starting address of send buffer (choice).
35.TP 1i
36count
37Number of elements in send buffer (integer).
38.TP 1i
39datatype
40Datatype of elements of send buffer (handle).
41.TP 1i
42op
43Operation (handle).
44.TP 1i
45comm
46Communicator (handle).
47
48.SH OUTPUT PARAMETERS
49.ft R
50.TP 1i
51recvbuf
52Starting address of receive buffer (choice).
53.ft R
54.TP 1i
55IERROR
56Fortran only: Error status (integer).
57
.SH DESCRIPTION
.ft R
Same as MPI_Reduce except that the result appears in the receive buffer of all the group members.
.sp
\fBExample 1:\fR A routine that computes the product of a vector and an array that are distributed across a group of processes and returns the answer at all nodes (compare with Example 2, with MPI_Reduce, below).
.sp
.nf
SUBROUTINE PAR_BLAS2(m, n, a, b, c, comm)
REAL a(m), b(m,n)    ! local slice of array
REAL c(n)            ! result
REAL sum(n)
INTEGER n, comm, i, j, ierr

! local sum
DO j= 1, n
  sum(j) = 0.0
  DO i = 1, m
    sum(j) = sum(j) + a(i)*b(i,j)
  END DO
END DO

! global sum
CALL MPI_ALLREDUCE(sum, c, n, MPI_REAL, MPI_SUM, comm, ierr)

! return result at all nodes
RETURN
.fi
.sp
\fBExample 2:\fR A routine that computes the product of a vector and an array that are distributed across a group of processes and returns the answer at node zero.
.sp
.nf
SUBROUTINE PAR_BLAS2(m, n, a, b, c, comm)
REAL a(m), b(m,n)    ! local slice of array
REAL c(n)            ! result
REAL sum(n)
INTEGER n, comm, i, j, ierr

! local sum
DO j= 1, n
  sum(j) = 0.0
  DO i = 1, m
    sum(j) = sum(j) + a(i)*b(i,j)
  END DO
END DO

! global sum
CALL MPI_REDUCE(sum, c, n, MPI_REAL, MPI_SUM, 0, comm, ierr)

! return result at node zero (and garbage at the other nodes)
RETURN
.fi
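.sp
The same all-to-all pattern can be written in C.  The fragment below is an
illustrative sketch (the variable names and the use of MPI_COMM_WORLD are
arbitrary choices): every process contributes one value, and every process
receives the global sum.
.sp
.nf
#include <mpi.h>

int main(int argc, char *argv[])
{
    /* illustrative sketch: one double per process, summed everywhere */
    double local, total;

    MPI_Init(&argc, &argv);

    local = 1.0;    /* this process's contribution */

    /* unlike MPI_Reduce, every rank receives the result in total */
    MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
.fi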
.SH USE OF IN-PLACE OPTION
When the communicator is an intracommunicator, you can perform an all-reduce operation in-place (the output buffer is used as the input buffer).  Use the variable MPI_IN_PLACE as the value of \fIsendbuf\fR at all processes.
.sp
Note that MPI_IN_PLACE is a special kind of value; it has the same restrictions on its use as MPI_BOTTOM.
.sp
Because the in-place option converts the receive buffer into a send-and-receive buffer, a Fortran binding that includes INTENT must mark these as INOUT, not OUT.
.sp
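As an illustration, an in-place all-reduce in C can be written as in the
following sketch (the buffer name and count are arbitrary): the receive
buffer initially holds this process's contribution and is overwritten with
the reduced result on every rank.
.sp
.nf
double buf[4];   /* arbitrary example buffer: local values on entry,
                    the reduced result on return, at every process */

/* ... fill buf with this process's contribution ... */

MPI_Allreduce(MPI_IN_PLACE, buf, 4, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
.fi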
.SH WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR
When the communicator is an inter-communicator, the reduce operation occurs in two phases.  The data is reduced from all the members of the first group and received by all the members of the second group.  Then the data is reduced from all the members of the second group and received by all the members of the first.  The operation exhibits a symmetric, full-duplex behavior.
.sp
The first group defines the root process.  The root process uses MPI_ROOT as the value of \fIroot\fR.  All other processes in the first group use MPI_PROC_NULL as the value of \fIroot\fR.  All processes in the second group use the rank of the root process in the first group as the value of \fIroot\fR.
.sp
When the communicator is an intra-communicator, these groups are the same, and the operation occurs in a single phase.
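.sp
As an illustration of this two-phase behavior, the following C sketch
(hypothetical: it assumes at least two processes in MPI_COMM_WORLD and splits
them into two groups by rank parity) performs an all-reduce across an
inter-communicator; each group's values are summed and the result is
delivered to every process of the other group.
.sp
.nf
MPI_Comm intra, inter;
int rank, color;
double local, sum;

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
color = rank % 2;                  /* arbitrary split: even vs. odd ranks */
MPI_Comm_split(MPI_COMM_WORLD, color, rank, &intra);

/* the local leader is rank 0 of each group; the remote leader is the
   other group's leader, which has rank 0 or 1 in MPI_COMM_WORLD */
MPI_Intercomm_create(intra, 0, MPI_COMM_WORLD, 1 - color, 0, &inter);

local = (double)rank;
MPI_Allreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, inter);
.fi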
.SH NOTES ON COLLECTIVE OPERATIONS

The reduction functions (
.I MPI_Op
) do not return an error value.  As a result,
if the functions detect an error, all they can do is either call
.I MPI_Abort
or silently skip the problem.  Thus, if you change the error handler from
.I MPI_ERRORS_ARE_FATAL
to something else, for example,
.I MPI_ERRORS_RETURN
,
then no error may be indicated.
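.sp
For example, the error handler on a communicator can be changed so that
MPI_Allreduce returns an error code instead of aborting the job.  The
fragment below is an illustrative sketch (the buffers and count are
arbitrary); note that errors detected inside the reduction function itself
may still go unreported, as described above.
.sp
.nf
double local = 1.0, total;
int rc;

/* ask for error codes rather than the default abort behavior */
MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

rc = MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
if (rc != MPI_SUCCESS) {
    /* handle or report the failure */
}
.fi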
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler
may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.