1.\"Copyright 2006-2008 Sun Microsystems, Inc.
2.\" Copyright (c) 1996 Thinking Machines Corporation
3.TH MPI_Accumulate 3 "Dec 08, 2009" "1.4" "Open MPI"
4.SH NAME
5\fBMPI_Accumulate \fP \- Combines the contents of the origin buffer with that of a target buffer.
6
7.SH SYNTAX
8.ft R
9.SH C Syntax
10.nf
11#include <mpi.h>
12int MPI_Accumulate(void *\fIorigin_addr\fP, int \fIorigin_count\fP,
13        MPI_Datatype \fIorigin_datatype\fP, int \fItarget_rank\fP,
14        MPI_Aint \fItarget_disp\fP, int \fItarget_count\fP,
15        MPI_Datatype \fItarget_datatype\fP, MPI_Op \fIop\fP, MPI_Win \fIwin\fP)
16
17.SH Fortran Syntax (see FORTRAN 77 NOTES)
18.nf
19INCLUDE 'mpif.h'
20MPI_ACCUMULATE(\fIORIGIN_ADDR, ORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK,
21        TARGET_DISP, TARGET_COUNT, TARGET_DATATYPE, OP, WIN, IERROR\fP)
22        <type> \fIORIGIN_ADDR\fP(*)
23        INTEGER(KIND=MPI_ADDRESS_KIND) \fITARGET_DISP\fP
24        INTEGER \fIORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK, TARGET_COUNT,
25        TARGET_DATATYPE, OP, WIN, IERROR \fP
26
27.SH C++ Syntax
28.nf
29#include <mpi.h>
30void MPI::Win::Accumulate(const void* \fIorigin_addr\fP, int \fIorigin_count\fP,
31        const MPI::Datatype& \fIorigin_datatype\fP, int \fItarget_rank\fP,
32        MPI::Aint \fItarget_disp\fP, int \fItarget_count\fP, const MPI::Datatype&
33        \fItarget_datatype\fP, const MPI::Op& \fIop\fP) const
34
.SH INPUT PARAMETERS
.ft R
.TP 1i
origin_addr
Initial address of buffer (choice).
.ft R
.TP 1i
origin_count
Number of entries in buffer (nonnegative integer).
.ft R
.TP 1i
origin_datatype
Data type of each buffer entry (handle).
.ft R
.TP 1i
target_rank
Rank of target (nonnegative integer).
.ft R
.TP 1i
target_disp
Displacement from start of window to beginning of target buffer (nonnegative integer).
.ft R
.TP 1i
target_count
Number of entries in target buffer (nonnegative integer).
.ft R
.TP 1i
target_datatype
Data type of each entry in target buffer (handle).
.ft R
.TP 1i
op
Reduce operation (handle).
.ft R
.TP 1i
win
Window object (handle).

.SH OUTPUT PARAMETER
.ft R
.TP 1i
IERROR
Fortran only: Error status (integer).
.SH DESCRIPTION
.ft R
MPI_Accumulate is a function used for one-sided MPI communication that adds the contents of the origin buffer (as defined by \fIorigin_addr\fP, \fIorigin_count\fP, and \fIorigin_datatype\fP) to the buffer specified by the arguments \fItarget_count\fP and \fItarget_datatype\fP, at offset \fItarget_disp\fP, in the target window specified by \fItarget_rank\fP and \fIwin\fP, using the operation \fIop\fP. The target window may belong to any process in the group over which \fIwin\fP was created. This is similar to MPI_Put, except that data is combined into the target area instead of overwriting it.
.sp
Any of the predefined operations for MPI_Reduce can be used. User-defined functions cannot be used. For example, if \fIop\fP is MPI_SUM, each element of the origin buffer is added to the corresponding element in the target, replacing the former value in the target.
.sp
Each datatype argument must be a predefined data type or a derived data type in which all basic components are of the same predefined data type. Both datatype arguments must be constructed from the same predefined data type. The operation \fIop\fP applies to elements of that predefined type. The \fItarget_datatype\fP argument must not specify overlapping entries, and the target buffer must fit in the target window.
.sp
A new predefined operation, MPI_REPLACE, is defined. It corresponds to the associative function f(a, b) = b; that is, the current value in the target memory is replaced by the value supplied by the origin.
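.sp
As an illustrative sketch (the window and buffer names here are arbitrary, not part of the MPI interface), the following program has every process add its rank into element 0 of rank 0's window with MPI_SUM, inside MPI_Win_fence access epochs:
.sp
.nf
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, winbuf = 0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each process exposes one int; rank 0's copy is the target below. */
    MPI_Win_create(&winbuf, (MPI_Aint)sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);               /* open the access epoch  */
    MPI_Accumulate(&rank, 1, MPI_INT,    /* origin buffer          */
                   0, 0,                 /* target rank and disp   */
                   1, MPI_INT, MPI_SUM, win);
    MPI_Win_fence(0, win);               /* close the access epoch */

    /* On rank 0, winbuf now holds the sum of all ranks. */
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
.fi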
.SH FORTRAN 77 NOTES
.ft R
The MPI standard prescribes portable Fortran syntax for
the \fITARGET_DISP\fP argument only for Fortran 90.  FORTRAN 77
users may use the non-portable syntax
.sp
.nf
     INTEGER*MPI_ADDRESS_KIND \fITARGET_DISP\fP
.fi
.sp
where MPI_ADDRESS_KIND is a constant defined in mpif.h
and gives the length of the declared integer in bytes.
.SH NOTES
MPI_Put is a special case of MPI_Accumulate, with the operation MPI_REPLACE. Note, however, that MPI_Put and MPI_Accumulate have different constraints on concurrent updates.
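.sp
As an illustrative sketch (the function and variable names are arbitrary, and an access epoch is assumed to be open on \fIwin\fP), replacing one element with MPI_Accumulate and MPI_REPLACE deposits the same value that the corresponding MPI_Put would:
.sp
.nf
#include <mpi.h>

/* Replace one MPI_INT at displacement disp in target's window with value. */
void replace_element(int value, int target, MPI_Aint disp, MPI_Win win)
{
    MPI_Accumulate(&value, 1, MPI_INT, target, disp, 1, MPI_INT,
                   MPI_REPLACE, win);
    /* The corresponding put would be
     *     MPI_Put(&value, 1, MPI_INT, target, disp, 1, MPI_INT, win);
     * though the two routines differ in the concurrent updates they allow. */
}
.fi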
.sp
It is the user's responsibility to guarantee that, when
using the accumulate functions, the target displacement argument is such
that accesses to the window are properly aligned according to the data
type arguments in the call to the MPI_Accumulate function.

.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler
may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
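.sp
As an illustrative sketch (the function name is arbitrary), an application can install the predefined MPI_ERRORS_RETURN handler so that error codes are returned to the caller and can be checked:
.sp
.nf
#include <mpi.h>

void return_error_codes(void)
{
    int rc;

    /* Return error codes instead of aborting the job on error. */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    rc = MPI_Barrier(MPI_COMM_WORLD);
    if (rc != MPI_SUCCESS) {
        /* Handle or report the error; MPI may not be able to continue. */
    }
}
.fi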

.SH SEE ALSO
.ft R
.sp
MPI_Put
.br
MPI_Reduce