1.\"Copyright 2006-2008 Sun Microsystems, Inc.
2.\" Copyright (c) 1996 Thinking Machines Corporation
3.TH MPI_Gatherv 3 "Dec 08, 2009" "1.4" "Open MPI"
4.SH NAME
5\fBMPI_Gatherv\fP \- Gathers varying amounts of data from all processes to the root process
6
7.SH SYNTAX
8.ft R
9.SH C Syntax
10.nf
11#include <mpi.h>
12int MPI_Gatherv(void *\fIsendbuf\fP, int\fI sendcount\fP, MPI_Datatype\fI sendtype\fP,
13        void\fI *recvbuf\fP, int\fI *recvcounts\fP, int\fI *displs\fP, MPI_Datatype\fI recvtype\fP,
14        int \fIroot\fP, MPI_Comm\fI comm\fP)
15
16.SH Fortran Syntax
17.nf
18INCLUDE 'mpif.h'
19MPI_GATHERV(\fISENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNTS,
20                DISPLS, RECVTYPE, ROOT, COMM, IERROR\fP)
21        <type>  \fISENDBUF(*), RECVBUF(*)\fP
22        INTEGER \fISENDCOUNT, SENDTYPE, RECVCOUNTS(*), DISPLS(*)\fP
23        INTEGER \fIRECVTYPE, ROOT, COMM, IERROR\fP
24
25.SH C++ Syntax
26.nf
27#include <mpi.h>
28void MPI::Comm::Gatherv(const void* \fIsendbuf\fP, int \fIsendcount\fP,
29        const MPI::Datatype& \fIsendtype\fP, void* \fIrecvbuf\fP,
30        const int \fIrecvcounts\fP[], const int \fIdispls\fP[],
31        const MPI::Datatype& \fIrecvtype\fP, int \fIroot\fP) const = 0
32
33.SH INPUT PARAMETERS
34.ft R
35.TP 1i
36sendbuf
37Starting address of send buffer (choice).
38.TP 1i
39sendcount
40Number of elements in send buffer (integer).
41.TP 1i
42sendtype
43Datatype of send buffer elements (handle).
44.TP 1i
45recvcounts
46Integer array (of length group size) containing the number of elements that
47are received from each process (significant only at root).
48.TP 1i
49displs
50Integer array (of length group size). Entry i specifies the displacement
51relative to recvbuf at which to place the incoming data from process i (significant only at root).
52.TP 1i
53recvtype
54Datatype of recv buffer elements (significant only at root) (handle).
55.TP 1i
56root
57Rank of receiving process (integer).
58.TP 1i
59comm
60Communicator (handle).
61
62.SH OUTPUT PARAMETERS
63.ft R
64.TP 1i
65recvbuf
66Address of receive buffer (choice, significant only at root).
67.ft R
68.TP 1i
69IERROR
70Fortran only: Error status (integer).
71
72.SH DESCRIPTION
73.ft R
74MPI_Gatherv extends the functionality of MPI_Gather by allowing a varying count of data from each process, since recvcounts is now an array. It also allows more flexibility as to where the data is placed on the root, by providing the new argument, displs. 
75.sp
76The outcome is as if each process, including the root process, sends a message to the root,
77.sp
78.nf
79    MPI_Send(sendbuf, sendcount, sendtype, root, \&...)
80.fi
81.sp
and the root executes n receives (where n is the group size),
.sp
.nf
    MPI_Recv(recvbuf + displs[i] * extent(recvtype), \\
             recvcounts[i], recvtype, i, \&...)
.fi
.sp
Messages are placed in the receive buffer of the root process in rank order, that is, the data sent from process j is placed in the jth portion of the receive buffer recvbuf on process root. The jth portion of recvbuf begins at offset displs[j] elements (in terms of recvtype) into recvbuf.
.sp
The receive buffer is ignored for all nonroot processes.
.sp
The type signature implied by sendcount, sendtype on process i must be equal to the type signature implied by recvcounts[i], recvtype at the root. This implies that the amount of data sent must be equal to the amount of data received, pairwise between each process and the root. Distinct type maps between sender and receiver are still allowed, as illustrated in Example 2, below.
.sp
All arguments to the function are significant on process root, while on other processes, only arguments sendbuf, sendcount, sendtype, root, comm are significant. The arguments root and comm must have identical values on all processes.
.sp
The specification of counts, types, and displacements should not cause any location on the root to be written more than once. Such a call is erroneous.
.sp
\fBExample 1:\fP Have each process send 100 ints to the root, but place
each set (of 100) stride ints apart at the receiving end. Use MPI_Gatherv and
the displs argument to achieve this effect. Assume stride >= 100.
.sp
.nf
      MPI_Comm comm;
      int gsize,sendarray[100];
      int root, *rbuf, stride;
      int *displs,i,*rcounts;

  \&...

      MPI_Comm_size(comm, &gsize);
      rbuf = (int *)malloc(gsize*stride*sizeof(int));
      displs = (int *)malloc(gsize*sizeof(int));
      rcounts = (int *)malloc(gsize*sizeof(int));
      for (i=0; i<gsize; ++i) {
          displs[i] = i*stride;
          rcounts[i] = 100;
      }
      MPI_Gatherv(sendarray, 100, MPI_INT, rbuf, rcounts,
                  displs, MPI_INT, root, comm);
.fi
.sp
Note that the program is erroneous if stride < 100.
.sp
\fBExample 2:\fP Same as Example 1 on the receiving side, but send the 100
ints from the 0th column of a 100 x 150 int array, in C.
.sp
.nf
      MPI_Comm comm;
      int gsize,sendarray[100][150];
      int root, *rbuf, stride;
      MPI_Datatype stype;
      int *displs,i,*rcounts;

  \&...

      MPI_Comm_size(comm, &gsize);
      rbuf = (int *)malloc(gsize*stride*sizeof(int));
      displs = (int *)malloc(gsize*sizeof(int));
      rcounts = (int *)malloc(gsize*sizeof(int));
      for (i=0; i<gsize; ++i) {
          displs[i] = i*stride;
          rcounts[i] = 100;
      }
      /* Create datatype for 1 column of array
       */
      MPI_Type_vector(100, 1, 150, MPI_INT, &stype);
      MPI_Type_commit( &stype );
      MPI_Gatherv(sendarray, 1, stype, rbuf, rcounts,
                  displs, MPI_INT, root, comm);
.fi
.sp
\fBExample 3:\fP Process i sends (100-i) ints from the ith column of a 100
x 150 int array, in C. It is received into a buffer with stride, as in the
previous two examples.
.sp
.nf
      MPI_Comm comm;
      int gsize,sendarray[100][150],*sptr;
      int root, *rbuf, stride, myrank;
      MPI_Datatype stype;
      int *displs,i,*rcounts;

  \&...

      MPI_Comm_size(comm, &gsize);
      MPI_Comm_rank( comm, &myrank );
      rbuf = (int *)malloc(gsize*stride*sizeof(int));
      displs = (int *)malloc(gsize*sizeof(int));
      rcounts = (int *)malloc(gsize*sizeof(int));
      for (i=0; i<gsize; ++i) {
          displs[i] = i*stride;
          rcounts[i] = 100-i;  /* note change from previous example */
      }
      /* Create datatype for the column we are sending
       */
      MPI_Type_vector(100-myrank, 1, 150, MPI_INT, &stype);
      MPI_Type_commit( &stype );
      /* sptr is the address of start of "myrank" column
       */
      sptr = &sendarray[0][myrank];
      MPI_Gatherv(sptr, 1, stype, rbuf, rcounts, displs, MPI_INT,
         root, comm);
.fi
.sp
Note that a different amount of data is received from each process.
.sp
\fBExample 4:\fP Same as Example 3, but done in a different way at the sending end. We create a datatype that causes the correct striding at the sending end so that we read a column of a C array.
.sp
.nf
      MPI_Comm comm;
      int gsize,sendarray[100][150],*sptr;
      int root, *rbuf, stride, myrank, disp[2], blocklen[2];
      MPI_Datatype stype,type[2];
      int *displs,i,*rcounts;

  \&...

      MPI_Comm_size(comm, &gsize);
      MPI_Comm_rank( comm, &myrank );
      rbuf = (int *)malloc(gsize*stride*sizeof(int));
      displs = (int *)malloc(gsize*sizeof(int));
      rcounts = (int *)malloc(gsize*sizeof(int));
      for (i=0; i<gsize; ++i) {
          displs[i] = i*stride;
          rcounts[i] = 100-i;
      }
      /* Create datatype for one int, with extent of entire row
       */
      disp[0] = 0;       disp[1] = 150*sizeof(int);
      type[0] = MPI_INT; type[1] = MPI_UB;
      blocklen[0] = 1;   blocklen[1] = 1;
      MPI_Type_struct( 2, blocklen, disp, type, &stype );
      MPI_Type_commit( &stype );
      sptr = &sendarray[0][myrank];
      MPI_Gatherv(sptr, 100-myrank, stype, rbuf, rcounts,
                  displs, MPI_INT, root, comm);
.fi
.sp
\fBExample 5:\fP Same as Example 3 at sending side, but at receiving side
we make the stride between received blocks vary from block to block.
.sp
.nf
      MPI_Comm comm;
      int gsize,sendarray[100][150],*sptr;
      int root, *rbuf, *stride, myrank, bufsize;
      MPI_Datatype stype;
      int *displs,i,*rcounts,offset;

  \&...

      MPI_Comm_size( comm, &gsize);
      MPI_Comm_rank( comm, &myrank );

      stride = (int *)malloc(gsize*sizeof(int));
  \&...
      /* stride[i] for i = 0 to gsize-1 is set somehow
       */
      /* set up displs and rcounts vectors first
       */
      displs = (int *)malloc(gsize*sizeof(int));
      rcounts = (int *)malloc(gsize*sizeof(int));
      offset = 0;
      for (i=0; i<gsize; ++i) {
          displs[i] = offset;
          offset += stride[i];
          rcounts[i] = 100-i;
      }
      /* the required buffer size for rbuf is now easily obtained
       */
      bufsize = displs[gsize-1]+rcounts[gsize-1];
      rbuf = (int *)malloc(bufsize*sizeof(int));
      /* Create datatype for the column we are sending
       */
      MPI_Type_vector(100-myrank, 1, 150, MPI_INT, &stype);
      MPI_Type_commit( &stype );
      sptr = &sendarray[0][myrank];
      MPI_Gatherv(sptr, 1, stype, rbuf, rcounts,
                  displs, MPI_INT, root, comm);
.fi
.sp
\fBExample 6:\fP Process i sends num ints from the ith column of a 100 x
150 int array, in C. The complicating factor is that the various values of num are not known to root, so a separate gather must first be run to find these out. The data is placed contiguously at the receiving end.
.sp
.nf
      MPI_Comm comm;
      int gsize,sendarray[100][150],*sptr;
      int root, *rbuf, stride, myrank, disp[2], blocklen[2];
      MPI_Datatype stype,type[2];
      int *displs,i,*rcounts,num;

  \&...

      MPI_Comm_size( comm, &gsize);
      MPI_Comm_rank( comm, &myrank );

      /* First, gather nums to root
       */
      rcounts = (int *)malloc(gsize*sizeof(int));
      MPI_Gather( &num, 1, MPI_INT, rcounts, 1, MPI_INT, root, comm);
      /* root now has correct rcounts, using these we set
       * displs[] so that data is placed contiguously (or
       * concatenated) at receive end
       */
      displs = (int *)malloc(gsize*sizeof(int));
      displs[0] = 0;
      for (i=1; i<gsize; ++i) {
          displs[i] = displs[i-1]+rcounts[i-1];
      }
      /* And, create receive buffer
       */
      rbuf = (int *)malloc((displs[gsize-1]+rcounts[gsize-1])
              *sizeof(int));
      /* Create datatype for one int, with extent of entire row
       */
      disp[0] = 0;       disp[1] = 150*sizeof(int);
      type[0] = MPI_INT; type[1] = MPI_UB;
      blocklen[0] = 1;   blocklen[1] = 1;
      MPI_Type_struct( 2, blocklen, disp, type, &stype );
      MPI_Type_commit( &stype );
      sptr = &sendarray[0][myrank];
      MPI_Gatherv(sptr, num, stype, rbuf, rcounts,
                  displs, MPI_INT, root, comm);
.fi
.SH USE OF IN-PLACE OPTION
The in-place option operates in the same way as it does for MPI_Gather. When the communicator is an intracommunicator, you can perform a gather operation in-place (the output buffer is used as the input buffer). Use the variable MPI_IN_PLACE as the value of the root process \fIsendbuf\fR. In this case, \fIsendcount\fR and \fIsendtype\fR are ignored, and the contribution of the root process to the gathered vector is assumed to already be in the correct place in the receive buffer.
.sp
Note that MPI_IN_PLACE is a special kind of value; it has the same restrictions on its use as MPI_BOTTOM.
.sp
Because the in-place option converts the receive buffer into a send-and-receive buffer, a Fortran binding that includes INTENT must mark these as INOUT, not OUT.
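.sp
As an illustration (not part of the original example set), a minimal sketch of an in-place gather on an intracommunicator follows; the values passed for the ignored send arguments at the root (0 and MPI_DATATYPE_NULL) are arbitrary placeholders:
.sp
.nf
      /* Root's contribution is assumed to already sit at
       * rbuf + displs[myrank]; sendcount/sendtype are ignored. */
      if (myrank == root) {
          MPI_Gatherv(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                      rbuf, rcounts, displs, MPI_INT, root, comm);
      } else {
          /* recvbuf, recvcounts, displs, recvtype are ignored
           * on non-root processes */
          MPI_Gatherv(sendbuf, sendcount, MPI_INT, NULL, NULL,
                      NULL, MPI_INT, root, comm);
      }
.fi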
.sp
.SH WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR
.sp
When the communicator is an inter-communicator, the root process in the first group gathers data from all the processes in the second group. The first group defines the root process. That process uses MPI_ROOT as the value of its \fIroot\fR argument. The remaining processes use MPI_PROC_NULL as the value of their \fIroot\fR argument. All processes in the second group use the rank of that root process in the first group as the value of their \fIroot\fR argument. The send buffer arguments of the processes in the second group must be consistent with the receive buffer argument of the root process in the first group.
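.sp
A sketch of this root-argument pattern (illustrative only; the flag \fIin_first_group\fR and the choice of rank 0 of the first group as the gathering root are assumptions of this example, not part of the interface):
.sp
.nf
      if (in_first_group) {
          if (myrank == 0)      /* the gathering root */
              MPI_Gatherv(NULL, 0, MPI_INT, rbuf, rcounts,
                          displs, MPI_INT, MPI_ROOT, comm);
          else                  /* other first-group processes */
              MPI_Gatherv(NULL, 0, MPI_INT, NULL, NULL,
                          NULL, MPI_INT, MPI_PROC_NULL, comm);
      } else {
          /* second group: send to rank 0 of the first group */
          MPI_Gatherv(sendbuf, sendcount, MPI_INT, NULL, NULL,
                      NULL, MPI_INT, 0, comm);
      }
.fi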
.sp
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
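.sp
For example, a brief sketch of checking the return value after installing MPI_ERRORS_RETURN (the error handling shown is illustrative):
.sp
.nf
      int rc;
      MPI_Comm_set_errhandler(comm, MPI_ERRORS_RETURN);
      rc = MPI_Gatherv(sendbuf, sendcount, MPI_INT, rbuf, rcounts,
                       displs, MPI_INT, root, comm);
      if (rc != MPI_SUCCESS) {
          /* handle or report the error; continuing past an
           * error is not guaranteed by MPI */
      }
.fi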
.SH SEE ALSO
.ft R
.sp
.nf
MPI_Gather
MPI_Scatter
MPI_Scatterv