.\"Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.TH MPI_Alltoallv 3 "Dec 08, 2009" "1.4" "Open MPI"

.SH NAME
\fBMPI_Alltoallv\fP \- All processes send different amounts of data to, and receive different amounts of data from, all processes
.SH SYNTAX
.ft R

.SH C Syntax
.nf
#include <mpi.h>
int MPI_Alltoallv(void *\fIsendbuf\fP, int *\fIsendcounts\fP,
        int *\fIsdispls\fP, MPI_Datatype \fIsendtype\fP,
        void *\fIrecvbuf\fP, int *\fIrecvcounts\fP,
        int *\fIrdispls\fP, MPI_Datatype \fIrecvtype\fP, MPI_Comm \fIcomm\fP)
.SH Fortran Syntax
.nf
INCLUDE 'mpif.h'

MPI_ALLTOALLV(\fISENDBUF, SENDCOUNTS, SDISPLS, SENDTYPE,
        RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPE, COMM, IERROR\fP)

        <type>  \fISENDBUF(*), RECVBUF(*)\fP
        INTEGER \fISENDCOUNTS(*), SDISPLS(*), SENDTYPE\fP
        INTEGER \fIRECVCOUNTS(*), RDISPLS(*), RECVTYPE\fP
        INTEGER \fICOMM, IERROR\fP
.SH C++ Syntax
.nf
#include <mpi.h>
void MPI::Comm::Alltoallv(const void* \fIsendbuf\fP,
        const int \fIsendcounts\fP[], const int \fIsdispls\fP[],
        const MPI::Datatype& \fIsendtype\fP, void* \fIrecvbuf\fP,
        const int \fIrecvcounts\fP[], const int \fIrdispls\fP[],
        const MPI::Datatype& \fIrecvtype\fP)
.SH INPUT PARAMETERS
.ft R
.TP 1.2i
sendbuf
Starting address of send buffer.
.TP 1.2i
sendcounts
Integer array, where entry i specifies the number of elements to send
to rank i.
.TP 1.2i
sdispls
Integer array, where entry i specifies the displacement (offset from
\fIsendbuf\fP, in units of \fIsendtype\fP) from which to send data to
rank i.
.TP 1.2i
sendtype
Datatype of send buffer elements.
.TP 1.2i
recvcounts
Integer array, where entry j specifies the number of elements to
receive from rank j.
.TP 1.2i
rdispls
Integer array, where entry j specifies the displacement (offset from
\fIrecvbuf\fP, in units of \fIrecvtype\fP) to which data from rank j
should be written.
.TP 1.2i
recvtype
Datatype of receive buffer elements.
.TP 1.2i
comm
Communicator over which data is to be exchanged.
.SH OUTPUT PARAMETERS
.ft R
.TP 1.2i
recvbuf
Address of receive buffer.
.ft R
.TP 1.2i
IERROR
Fortran only: Error status.
.SH DESCRIPTION
.ft R
MPI_Alltoallv is a generalized collective operation in which all
processes send data to and receive data from all other processes. It
adds flexibility to MPI_Alltoall by allowing the user to specify data
to send and receive vector-style (via a displacement and element
count). The operation of this routine can be thought of as follows,
where each process performs 2n (n being the number of processes in
communicator \fIcomm\fP) independent point-to-point communications
(including communication with itself).
.sp
.nf
        MPI_Comm_size(\fIcomm\fP, &n);
        for (i = 0; i < n; i++)
            MPI_Send(\fIsendbuf\fP + \fIsdispls\fP[i] * extent(\fIsendtype\fP),
                \fIsendcounts\fP[i], \fIsendtype\fP, i, ..., \fIcomm\fP);
        for (i = 0; i < n; i++)
            MPI_Recv(\fIrecvbuf\fP + \fIrdispls\fP[i] * extent(\fIrecvtype\fP),
                \fIrecvcounts\fP[i], \fIrecvtype\fP, i, ..., \fIcomm\fP);
.fi
.sp
Process j sends the k-th block of its local \fIsendbuf\fP to process
k, which places the data in the j-th block of its local
\fIrecvbuf\fP.
.sp
When a pair of processes exchanges data, each may pass different
element count and datatype arguments so long as the sender specifies
the same amount of data to send (in bytes) as the receiver expects
to receive.
.sp
Note that process i may send a different amount of data to process j
than it receives from process j. Also, a process may send entirely
different amounts of data to different processes in the communicator.
.sp
WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR
.sp
When the communicator is an inter-communicator, the exchange occurs in
two phases. First, each member of the first group sends its data to,
and it is received by, the members of the second group. Then each
member of the second group sends its data to, and it is received by,
the members of the first group. The operation exhibits a symmetric,
full-duplex behavior; MPI_Alltoallv takes no \fIroot\fR argument, and
no root process is involved.
.sp
When the communicator is an intra-communicator, these groups are the
same, and the operation occurs in a single phase.
.sp
.SH NOTES
.ft R
The MPI_IN_PLACE option is not available for any form of all-to-all
communication.
.sp
The specification of counts and displacements should not cause
any location to be written more than once.
.sp
All arguments on all processes are significant. The \fIcomm\fP argument,
in particular, must describe the same communicator on all processes.
.sp
The offsets in \fIsdispls\fP and \fIrdispls\fP are measured in units
of \fIsendtype\fP and \fIrecvtype\fP, respectively. Compare this to
MPI_Alltoallw, where these offsets are measured in bytes.

.SH ERRORS
.ft R
Almost all MPI routines return an error value; C routines as
the value of the function and Fortran routines in the last argument. C++
functions do not return errors. If the default error handler is set to
MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism
will be used to throw an MPI::Exception object.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for
I/O function errors. The error handler may be changed with
MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN
may be used to cause error values to be returned. Note that MPI does not
guarantee that an MPI program can continue past an error.
.SH SEE ALSO
.ft R
.nf
MPI_Alltoall
MPI_Alltoallw