source: proiecte/hpl/openmpi_compiled/share/man/man3/MPI_Scatterv.3 @ 97

1.\"Copyright 2006-2008 Sun Microsystems, Inc.
2.\" Copyright (c) 1996 Thinking Machines Corporation
3.TH MPI_Scatterv 3 "Dec 08, 2009" "1.4" "Open MPI"
4.SH NAME
5\fBMPI_Scatterv\fP \- Scatters a buffer in parts to all tasks in a group.
6
7.SH SYNTAX
8.ft R
9.SH C Syntax
10.nf
11#include <mpi.h>
12int MPI_Scatterv(void *\fIsendbuf\fP, int\fI *sendcounts\fP, int\fI *displs\fP,
13        MPI_Datatype\fI sendtype\fP, void\fI *recvbuf\fP, int\fI recvcount\fP,
14        MPI_Datatype\fI recvtype\fP, int\fI root\fP, MPI_Comm\fI comm\fP)
15
16.SH Fortran Syntax
17.nf
18INCLUDE 'mpif.h'
19MPI_SCATTERV(\fISENDBUF, SENDCOUNTS, DISPLS, SENDTYPE, RECVBUF,
20                RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR\fP)
21        <type>  \fISENDBUF(*), RECVBUF(*)\fP
22        INTEGER \fISENDCOUNTS(*), DISPLS(*), SENDTYPE\fP
23        INTEGER \fIRECVCOUNT, RECVTYPE, ROOT, COMM, IERROR\fP
24
.SH C++ Syntax
.nf
#include <mpi.h>
void MPI::Comm::Scatterv(const void* \fIsendbuf\fP, const int \fIsendcounts\fP[],
        const int \fIdispls\fP[], const MPI::Datatype& \fIsendtype\fP,
        void* \fIrecvbuf\fP, int \fIrecvcount\fP, const MPI::Datatype&
        \fIrecvtype\fP, int \fIroot\fP) const

.SH INPUT PARAMETERS
.ft R
.TP 1i
sendbuf
Address of send buffer (choice, significant only at root).
.TP 1i
sendcounts
Integer array (of length group size) specifying the number of elements to
send to each processor.
.TP 1i
displs
Integer array (of length group size). Entry i specifies the displacement
(relative to sendbuf) from which to take the outgoing data to process i.
.TP 1i
sendtype
Datatype of send buffer elements (handle).
.TP 1i
recvcount
Number of elements in receive buffer (integer).
.TP 1i
recvtype
Datatype of receive buffer elements (handle).
.TP 1i
root
Rank of sending process (integer).
.TP 1i
comm
Communicator (handle).

.SH OUTPUT PARAMETERS
.ft R
.TP 1i
recvbuf
Address of receive buffer (choice).
.ft R
.TP 1i
IERROR
Fortran only: Error status (integer).

.SH DESCRIPTION
.ft R
MPI_Scatterv is the inverse operation to MPI_Gatherv.
.sp
MPI_Scatterv extends the functionality of MPI_Scatter by allowing a varying
count of data to be sent to each process, since \fIsendcounts\fP is now an array.
It also allows more flexibility as to where the data is taken from on the
root, by providing the new argument, \fIdispls\fP.
.sp
The outcome is as if the root executed \fIn\fP send operations,
.sp
.nf
    MPI_Send(\fIsendbuf\fP + \fIdispls\fP[\fIi\fP] * \fIextent\fP(\fIsendtype\fP), \\
             \fIsendcounts\fP[\fIi\fP], \fIsendtype\fP, \fIi\fP, \&...)

and each process executed a receive,

    MPI_Recv(\fIrecvbuf\fP, \fIrecvcount\fP, \fIrecvtype\fP, \fIroot\fP, \&...)

The send buffer is ignored for all nonroot processes.
.fi
.sp
The type signature implied by \fIsendcounts\fP[\fIi\fP], \fIsendtype\fP at the root must be
equal to the type signature implied by \fIrecvcount\fP, \fIrecvtype\fP at process \fIi\fP
(however, the type maps may be different). This implies that the amount of
data sent must be equal to the amount of data received, pairwise between
each process and the root. Distinct type maps between sender and receiver
are still allowed.
.sp
All arguments to the function are significant on process \fIroot\fP, while on
other processes, only the arguments \fIrecvbuf\fP, \fIrecvcount\fP, \fIrecvtype\fP, \fIroot\fP, and \fIcomm\fP
are significant. The arguments \fIroot\fP and \fIcomm\fP must have identical values on
all processes.
.sp
The specification of counts, types, and displacements should not cause any
location on the root to be read more than once.
.sp
\fBExample 1:\fR The reverse of Example 5 in the MPI_Gatherv manpage. We
have a varying stride between blocks on the sending (root) side; on the
receiving side, process \fIi\fP receives 100-\fIi\fP elements into the \fIi\fPth column of a 100 x 150 C array.
.sp
.nf
    MPI_Comm comm;
    int gsize, recvarray[100][150], *rptr;
    int root, *sendbuf, myrank, bufsize, *stride;
    MPI_Datatype rtype;
    int i, *displs, *scounts, offset;
    \&...
    MPI_Comm_size( comm, &gsize);
    MPI_Comm_rank( comm, &myrank );

    stride = (int *)malloc(gsize*sizeof(int));
    \&...
    /* stride[i] for i = 0 to gsize-1 is set somehow
     * sendbuf comes from elsewhere
     */
    \&...
    displs = (int *)malloc(gsize*sizeof(int));
    scounts = (int *)malloc(gsize*sizeof(int));
    offset = 0;
    for (i=0; i<gsize; ++i) {
        displs[i] = offset;
        offset += stride[i];
        scounts[i] = 100 - i;
    }
    /* Create datatype for the column we are receiving
     */
    MPI_Type_vector( 100-myrank, 1, 150, MPI_INT, &rtype);
    MPI_Type_commit( &rtype );
    rptr = &recvarray[0][myrank];
    MPI_Scatterv(sendbuf, scounts, displs, MPI_INT,
                 rptr, 1, rtype, root, comm);
.fi
.sp
\fBExample 2:\fR The reverse of Example 1 in the MPI_Gather manpage. The
root process scatters sets of 100 ints to the other processes, but the sets
of 100 are \fIstride\fP ints apart in the sending buffer. This requires use of
MPI_Scatterv, where \fIstride\fP >= 100.
.sp
.nf
    MPI_Comm comm;
    int gsize, *sendbuf;
    int root, rbuf[100], i, *displs, *scounts;
    int stride;      /* stride >= 100, set elsewhere */

    \&...

    MPI_Comm_size(comm, &gsize);
    sendbuf = (int *)malloc(gsize*stride*sizeof(int));
    \&...
    displs = (int *)malloc(gsize*sizeof(int));
    scounts = (int *)malloc(gsize*sizeof(int));
    for (i=0; i<gsize; ++i) {
        displs[i] = i*stride;
        scounts[i] = 100;
    }
    MPI_Scatterv(sendbuf, scounts, displs, MPI_INT,
                 rbuf, 100, MPI_INT, root, comm);
.fi
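.sp
\fBExample 3\fR (an additional illustrative sketch; the variable names are
hypothetical and the example is not taken from the MPI standard): distribute
\fIn\fP ints as evenly as possible over the group when \fIn\fP is not a multiple of the
group size. The first \fIn\fP mod \fIgsize\fP ranks receive one extra element.
.sp
.nf
    MPI_Comm comm;
    int n = 1000, gsize, myrank, root = 0, i, offset = 0;
    int *sendbuf = NULL, *scounts = NULL, *displs = NULL, *rbuf;
    int base, extra, mycount;

    \&...
    MPI_Comm_size(comm, &gsize);
    MPI_Comm_rank(comm, &myrank);
    base = n / gsize;
    extra = n % gsize;
    /* every rank computes its own receive count */
    mycount = base + (myrank < extra ? 1 : 0);
    rbuf = (int *)malloc(mycount*sizeof(int));
    if (myrank == root) {
        /* sendbuf holds the n ints to distribute; filled elsewhere */
        sendbuf = (int *)malloc(n*sizeof(int));
        \&...
        scounts = (int *)malloc(gsize*sizeof(int));
        displs = (int *)malloc(gsize*sizeof(int));
        for (i = 0; i < gsize; ++i) {
            scounts[i] = base + (i < extra ? 1 : 0);
            displs[i] = offset;
            offset += scounts[i];
        }
    }
    /* the send-side arguments are significant only at the root */
    MPI_Scatterv(sendbuf, scounts, displs, MPI_INT,
                 rbuf, mycount, MPI_INT, root, comm);
.fi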
.SH USE OF IN-PLACE OPTION
When the communicator is an intracommunicator, you can perform a scatter operation in place (the output buffer is used as the input buffer).  Use the variable MPI_IN_PLACE as the value of the root process \fIrecvbuf\fR.  In this case, \fIrecvcount\fR and \fIrecvtype\fR are ignored, and the root process sends no data to itself.
.sp
Note that MPI_IN_PLACE is a special kind of value; it has the same restrictions on its use as MPI_BOTTOM.
.sp
Because the in-place option converts the receive buffer into a send-and-receive buffer, a Fortran binding that includes INTENT must mark these as INOUT, not OUT.
.sp
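For illustration, a minimal sketch of the in-place form (the variable names are
hypothetical; \fIsendbuf\fP, \fIscounts\fP, \fIdispls\fP, and \fIrbuf\fP are assumed to be set up as
in the examples above):
.sp
.nf
    if (myrank == root)
        /* recvcount and recvtype are ignored when MPI_IN_PLACE is used;
         * the root's own block of sendbuf simply stays where it is */
        MPI_Scatterv(sendbuf, scounts, displs, MPI_INT,
                     MPI_IN_PLACE, 0, MPI_INT, root, comm);
    else
        /* the send-side arguments are not significant on non-root processes */
        MPI_Scatterv(NULL, NULL, NULL, MPI_INT,
                     rbuf, 100, MPI_INT, root, comm);
.fi
.sp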
.SH WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR
.sp
When the communicator is an inter-communicator, the root process in the first group sends data to all processes in the second group.  The first group defines the root process.  That process uses MPI_ROOT as the value of its \fIroot\fR argument.  The remaining processes use MPI_PROC_NULL as the value of their \fIroot\fR argument.  All processes in the second group use the rank of that root process in the first group as the value of their \fIroot\fR argument.  The send buffer argument of the root process in the first group must be consistent with the receive buffer argument of the processes in the second group.
.sp
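A schematic sketch (the flags and variable names below are hypothetical, and
\fIintercomm\fP is assumed to have been created elsewhere, for example with
MPI_Intercomm_create):
.sp
.nf
    /* at the root, counts and displacements are sized for the
     * remote group (see MPI_Comm_remote_size), not the local one */
    if (i_am_the_sending_root)
        MPI_Scatterv(sendbuf, scounts, displs, MPI_INT,
                     NULL, 0, MPI_INT, MPI_ROOT, intercomm);
    else if (i_am_in_the_first_group)
        MPI_Scatterv(NULL, NULL, NULL, MPI_INT,
                     NULL, 0, MPI_INT, MPI_PROC_NULL, intercomm);
    else    /* a process in the second group */
        MPI_Scatterv(NULL, NULL, NULL, MPI_INT,
                     rbuf, rcount, MPI_INT, root_rank_in_first_group, intercomm);
.fi
.sp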
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.

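For example, a caller that wants error codes returned rather than having the
job aborted might do the following (an illustrative sketch; the scatter
arguments are assumed to be set up as in the examples above, and <stdio.h> is
assumed for fprintf):
.sp
.nf
    int rc, msglen;
    char msg[MPI_MAX_ERROR_STRING];

    /* return error codes instead of aborting the job */
    MPI_Comm_set_errhandler(comm, MPI_ERRORS_RETURN);
    rc = MPI_Scatterv(sendbuf, scounts, displs, MPI_INT,
                      rbuf, 100, MPI_INT, root, comm);
    if (rc != MPI_SUCCESS) {
        MPI_Error_string(rc, msg, &msglen);
        fprintf(stderr, "MPI_Scatterv failed: %s\\n", msg);
    }
.fi
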
.SH SEE ALSO
.sp
.nf
MPI_Gather
MPI_Gatherv
MPI_Scatter