.\" Copyright 2007-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.TH MPI_Publish_name 3 "Dec 08, 2009" "1.4" "Open MPI"

.SH NAME
.nf
\fBMPI_Publish_name\fP \- Publishes a service name associated with a port

.SH SYNTAX
.ft R

.SH C Syntax
.nf
#include <mpi.h>
int MPI_Publish_name(char *\fIservice_name\fP, MPI_Info \fIinfo\fP,
        char *\fIport_name\fP)

.SH Fortran Syntax
.nf
INCLUDE 'mpif.h'
MPI_PUBLISH_NAME(\fISERVICE_NAME, INFO, PORT_NAME, IERROR\fP)
        CHARACTER*(*)   \fISERVICE_NAME, PORT_NAME\fP
        INTEGER         \fIINFO, IERROR\fP

.SH C++ Syntax
.nf
#include <mpi.h>
void MPI::Publish_name(const char* \fIservice_name\fP, const MPI::Info& \fIinfo\fP,
        const char* \fIport_name\fP)

.SH INPUT PARAMETERS
.ft R
.TP 1.4i
service_name
A service name (string).
.TP 1.4i
info
Options to the name service functions (handle).
.ft R
.TP 1.4i
port_name
A port name (string).

.SH OUTPUT PARAMETER
.TP 1.4i
IERROR
Fortran only: Error status (integer).

.SH DESCRIPTION
.ft R
This routine publishes the pair (\fIservice_name, port_name\fP) so that
an application may retrieve \fIport_name\fP by calling MPI_Lookup_name
with \fIservice_name\fP as an argument. It is an error to publish the same
\fIservice_name\fP twice, or to use a \fIport_name\fP argument that was
not previously opened by the calling process via a call to MPI_Open_port.

.SH INFO ARGUMENTS
The following keys for \fIinfo\fP are recognized:
.sp
.sp
.nf
Key                   Type      Description
---                   ----      -----------

ompi_global_scope     bool      If set to true, publish the name in
                                the global scope.  Publish in the local
                                scope otherwise.  See the NAME SCOPE
                                section for more details.
.fi

.sp
\fIbool\fP info keys are actually strings but are evaluated as
follows: if the string value is a number, it is converted to an
integer and cast to a boolean (meaning that zero integers are false
and non-zero values are true). If the string value is
(case-insensitive) "yes" or "true", the boolean is true. If the
string value is (case-insensitive) "no" or "false", the boolean is
false. All other string values are unrecognized, and therefore false.
.PP
If no info key is provided, the function first checks whether a
global server has been specified and is available. If so, the publish
operation defaults to global scope first, followed by local scope.
Otherwise, the data is published with local scope by default.

85 | .SH NAME SCOPE |
---|
86 | Open MPI supports two name scopes: \fIglobal\fP and \fIlocal\fP. Local scope will |
---|
87 | place the specified service/port pair in a data store located on the |
---|
88 | mpirun of the calling process' job. Thus, data published with local |
---|
89 | scope will only be accessible to processes in jobs spawned by that |
---|
90 | mpirun - e.g., processes in the calling process' job, or in jobs |
---|
91 | spawned via MPI_Comm_spawn. |
---|
92 | .sp |
---|
93 | Global scope places the specified service/port pair in a data store |
---|
94 | located on a central server that is accessible to all jobs running |
---|
95 | in the cluster or environment. Thus, data published with global |
---|
96 | scope can be accessed by multiple mpiruns and used for MPI_Comm_Connect |
---|
97 | and MPI_Comm_accept between jobs. |
---|
98 | .sp |
---|
99 | Note that global scope operations require both the presence of the |
---|
100 | central server and that the calling process be able to communicate |
---|
101 | to that server. MPI_Publish_name will return an error if global |
---|
102 | scope is specified and a global server is either not specified or |
---|
103 | cannot be found. |
---|
104 | .sp |
---|
105 | Open MPI provides a server called \fIompi-server\fP to support global |
---|
106 | scope operations. Please refer to its manual page for a more detailed |
---|
107 | description of data store/lookup operations. |
---|
108 | .sp |
---|
109 | As an example of the impact of these scoping rules, consider the case |
---|
110 | where a job has been started with |
---|
111 | mpirun - call this job "job1". A process in job1 creates and publishes |
---|
112 | a service/port pair using a local scope. Open MPI will store this |
---|
113 | data in the data store within mpirun. |
---|
114 | .sp |
---|
115 | A process in job1 (perhaps the same as did the publish, or perhaps |
---|
116 | some other process in the job) subsequently calls MPI_Comm_spawn to |
---|
117 | start another job (call it "job2") under this mpirun. Since the two |
---|
118 | jobs share a common mpirun, both jobs have access to local scope data. Hence, |
---|
119 | a process in job2 can perform an MPI_Lookup_name with a local scope |
---|
120 | to retrieve the information. |
---|
121 | .sp |
---|
122 | However, assume another user starts a job using mpirun - call |
---|
123 | this job "job3". Because the service/port data published by job1 specified |
---|
124 | local scope, processes in job3 cannot access that data. In contrast, if the |
---|
125 | data had been published using global scope, then any process in job3 could |
---|
126 | access the data, provided that mpirun was given knowledge of how to contact |
---|
127 | the central server and the process could establish communication |
---|
128 | with it. |
---|
129 | |
---|
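A minimal sketch of the publish side in C, tying the description and scoping rules together (the service name "ocean" is illustrative, error checks are omitted, and requesting global scope requires a reachable \fIompi-server\fP):

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    char     port_name[MPI_MAX_PORT_NAME];
    MPI_Info info;

    MPI_Init(&argc, &argv);

    /* Obtain a port name from the runtime; the port must be opened
     * by the calling process before it can be published. */
    MPI_Open_port(MPI_INFO_NULL, port_name);

    /* Request global scope via the Open MPI-specific info key so
     * other mpiruns can look the name up. */
    MPI_Info_create(&info);
    MPI_Info_set(info, "ompi_global_scope", "true");

    /* Publish the (service_name, port_name) pair. */
    MPI_Publish_name("ocean", info, port_name);

    /* ... accept client connections with MPI_Comm_accept(port_name, ...) ... */

    /* Clean up: unpublish, close the port, free the info handle. */
    MPI_Unpublish_name("ocean", info, port_name);
    MPI_Close_port(port_name);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}
```

A client in another job would then call MPI_Lookup_name("ocean", ...) to retrieve the port name and pass it to MPI_Comm_connect.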
.SH ERRORS
.ft R
Almost all MPI routines return an error value; C routines as
the value of the function and Fortran routines in the last argument. C++
functions do not return errors. If the default error handler is set to
MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism
will be used to throw an MPI::Exception object.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for
I/O function errors. The error handler may be changed with
MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN
may be used to cause error values to be returned. Note that MPI does not
guarantee that an MPI program can continue past an error.
.sp
See the MPI man page for a full list of MPI error codes.

.SH SEE ALSO
.ft R
.nf
MPI_Lookup_name
MPI_Open_port
ompi-server