.\"Copyright 2006-2008 Sun Microsystems, Inc.
.\"Copyright (c) 1996 Thinking Machines
.TH MPI_Type_create_darray 3 "Dec 08, 2009" "1.4" "Open MPI"
.SH NAME
\fBMPI_Type_create_darray\fP \- Creates a distributed array datatype

.SH SYNTAX
.ft R
.SH C Syntax
.nf
#include <mpi.h>
int MPI_Type_create_darray(int \fIsize\fP, int \fIrank\fP, int \fIndims\fP,
        int \fIarray_of_gsizes\fP[], int \fIarray_of_distribs\fP[],
        int \fIarray_of_dargs\fP[], int \fIarray_of_psizes\fP[],
        int \fIorder\fP, MPI_Datatype \fIoldtype\fP, MPI_Datatype \fI*newtype\fP)

.SH Fortran Syntax
.nf
INCLUDE 'mpif.h'
MPI_TYPE_CREATE_DARRAY(\fISIZE, RANK, NDIMS, ARRAY_OF_GSIZES,
        ARRAY_OF_DISTRIBS, ARRAY_OF_DARGS, ARRAY_OF_PSIZES, ORDER,
        OLDTYPE, NEWTYPE, IERROR\fP)

        INTEGER \fISIZE, RANK, NDIMS, ARRAY_OF_GSIZES(*), ARRAY_OF_DISTRIBS(*),
                ARRAY_OF_DARGS(*), ARRAY_OF_PSIZES(*), ORDER, OLDTYPE,
                NEWTYPE, IERROR\fP

.SH C++ Syntax
.nf
#include <mpi.h>
MPI::Datatype MPI::Datatype::Create_darray(int \fIsize\fP, int \fIrank\fP,
        int \fIndims\fP, const int \fIarray_of_gsizes\fP[],
        const int \fIarray_of_distribs\fP[], const int \fIarray_of_dargs\fP[],
        const int \fIarray_of_psizes\fP[], int \fIorder\fP) const

.SH INPUT PARAMETERS
.ft R
.TP 1i
size
Size of process group (positive integer).
.TP 1i
rank
Rank in process group (nonnegative integer).
.TP 1i
ndims
Number of array dimensions as well as process grid dimensions (positive integer).
.sp
.TP 1i
array_of_gsizes
Number of elements of type \fIoldtype\fP in each dimension of the global array (array of positive integers).
.sp
.TP 1i
array_of_distribs
Distribution of array in each dimension (array of state).
.TP 1i
array_of_dargs
Distribution argument in each dimension (array of positive integers).
.sp
.TP 1i
array_of_psizes
Size of process grid in each dimension (array of positive integers).
.sp
.TP 1i
order
Array storage order flag (state).
.TP 1i
oldtype
Old data type (handle).

.SH OUTPUT PARAMETERS
.ft R
.TP 1i
newtype
New data type (handle).
.TP 1i
IERROR
Fortran only: Error status (integer).

.SH DESCRIPTION
.ft R

MPI_Type_create_darray can be used to generate the data types corresponding to the distribution of an \fIndims\fP-dimensional array of \fIoldtype\fP elements onto an \fIndims\fP-dimensional grid of logical processes. Unused dimensions of \fIarray_of_psizes\fP should be set to 1. For a call to MPI_Type_create_darray to be correct, the equation
.sp
.nf
    \fIndims\fP-1
      prod    \fIarray_of_psizes[i]\fP  =  \fIsize\fP
     \fIi\fP=0

.fi
.sp
must be satisfied; that is, the product of the process grid sizes over all dimensions must equal \fIsize\fP. The ordering of processes in the process grid is assumed to be row-major, as in the case of virtual Cartesian process topologies in MPI-1.
.sp
Each dimension of the array can be distributed in one of three ways:
.sp
.nf
- MPI_DISTRIBUTE_BLOCK - Block distribution
- MPI_DISTRIBUTE_CYCLIC - Cyclic distribution
- MPI_DISTRIBUTE_NONE - Dimension not distributed.
.fi
.sp
The constant MPI_DISTRIBUTE_DFLT_DARG specifies a default distribution argument. The distribution argument for a dimension that is not distributed is ignored. For any dimension \fIi\fP in which the distribution is MPI_DISTRIBUTE_BLOCK, it is erroneous to specify \fIarray_of_dargs[i]\fP \fI*\fP \fIarray_of_psizes[i]\fP < \fIarray_of_gsizes[i]\fP.
.sp
For example, the HPF layout ARRAY(CYCLIC(15)) corresponds to MPI_DISTRIBUTE_CYCLIC with a distribution argument of 15, and the HPF layout ARRAY(BLOCK) corresponds to MPI_DISTRIBUTE_BLOCK with a distribution argument of MPI_DISTRIBUTE_DFLT_DARG.
.sp
The \fIorder\fP argument is used as in MPI_TYPE_CREATE_SUBARRAY to specify the storage order. Therefore, arrays described by this type constructor may be stored in Fortran (column-major) or C (row-major) order. Valid values for \fIorder\fP are MPI_ORDER_FORTRAN and MPI_ORDER_C.
.sp
This routine creates a new MPI data type with a typemap defined in terms of a function called "cyclic()" (see below).
.sp
Without loss of generality, it suffices to define the typemap for the MPI_DISTRIBUTE_CYCLIC case where MPI_DISTRIBUTE_DFLT_DARG is not used.
.sp
MPI_DISTRIBUTE_BLOCK and MPI_DISTRIBUTE_NONE can be reduced to the MPI_DISTRIBUTE_CYCLIC case for dimension \fIi\fP as follows.
.sp
MPI_DISTRIBUTE_BLOCK with \fIarray_of_dargs[i]\fP equal to MPI_DISTRIBUTE_DFLT_DARG is equivalent to MPI_DISTRIBUTE_CYCLIC with \fIarray_of_dargs[i]\fP set to
.sp
.nf
   (\fIarray_of_gsizes[i]\fP + \fIarray_of_psizes[i]\fP - 1)/\fIarray_of_psizes[i]\fP
.fi
.sp
If \fIarray_of_dargs[i]\fP is not MPI_DISTRIBUTE_DFLT_DARG, then MPI_DISTRIBUTE_BLOCK and MPI_DISTRIBUTE_CYCLIC are equivalent.
.sp
MPI_DISTRIBUTE_NONE is equivalent to MPI_DISTRIBUTE_CYCLIC with \fIarray_of_dargs[i]\fP set to \fIarray_of_gsizes[i]\fP.
.sp
Finally, MPI_DISTRIBUTE_CYCLIC with \fIarray_of_dargs[i]\fP equal to MPI_DISTRIBUTE_DFLT_DARG is equivalent to MPI_DISTRIBUTE_CYCLIC with \fIarray_of_dargs[i]\fP set to 1.
.sp

.SH NOTES
.ft R
For both Fortran and C arrays, the ordering of processes in the process grid is assumed to be row-major. This is consistent with the ordering used in virtual Cartesian process topologies in MPI-1. To create such virtual process topologies, or to find the coordinates of a process in the process grid, users may use the corresponding functions provided in MPI-1.

.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.