# -*- text -*-
#
# Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
#                         University Research and Technology
#                         Corporation. All rights reserved.
# Copyright (c) 2004-2005 The University of Tennessee and The University
#                         of Tennessee Research Foundation. All rights
#                         reserved.
# Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
#                         University of Stuttgart. All rights reserved.
# Copyright (c) 2004-2005 The Regents of the University of California.
#                         All rights reserved.
# Copyright (c) 2007      Cisco Systems, Inc. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow
#
# $HEADER$
#
# This is the US/English general help file for Open MPI.
#
[mpi_init:startup:internal-failure]
It looks like %s failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during %s, some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  %s
  --> Returned "%s" (%d) instead of "Success" (0)
[mpi-param-check-enabled-but-compiled-out]
WARNING: The MCA parameter mpi_param_check has been set to true, but
parameter checking has been compiled out of Open MPI. The
mpi_param_check value has therefore been ignored.
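The warning above refers to an MCA parameter set by the user. As a
minimal sketch of how such a parameter is typically set, Open MPI
reads `OMPI_MCA_`-prefixed environment variables (the application name
in the comment is a placeholder, not from this file):

```shell
# Request MPI argument checking for a run by setting the MCA parameter
# through Open MPI's OMPI_MCA_ environment-variable convention.
# This only takes effect if parameter checking was compiled in.
export OMPI_MCA_mpi_param_check=1

# Equivalent one-off form on the command line ("./my_app" is a placeholder):
#   mpirun --mca mpi_param_check 1 -np 4 ./my_app
echo "OMPI_MCA_mpi_param_check=${OMPI_MCA_mpi_param_check}"
```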
[mpi-params:leave-pinned-and-pipeline-selected]
WARNING: Cannot set both the MCA parameters mpi_leave_pinned and
mpi_leave_pinned_pipeline to "true". Defaulting to mpi_leave_pinned
ONLY.
[mpi_init:startup:paffinity-unavailable]
The MCA parameter "mpi_paffinity_alone" was set to a nonzero value,
but Open MPI was unable to bind MPI_COMM_WORLD rank %s to a processor.

Typical causes for this problem include:

- A node was oversubscribed (more processes than processors), in
  which case Open MPI will not bind any processes on that node
- A startup mechanism was used which did not tell Open MPI which
  processors to bind processes to
[mpi_finalize:invoked_multiple_times]
The function MPI_FINALIZE was invoked multiple times in a single
process on host %s, PID %d.

This indicates an erroneous MPI program; MPI_FINALIZE may be invoked
only once per process.
[proc:heterogeneous-support-unavailable]
The build of Open MPI running on host %s was not
compiled with heterogeneous support. A process running on host
%s appears to have a different architecture,
which will not work. Please recompile Open MPI with the
configure option --enable-heterogeneous or use a homogeneous
environment.
#
[sparse groups enabled but compiled out]
WARNING: The MCA parameter mpi_use_sparse_group_storage has been set
to true, but sparse group support was not compiled into Open MPI. The
mpi_use_sparse_group_storage value has therefore been ignored.
#
[heterogeneous-support-unavailable]
This installation of Open MPI was configured without support for
heterogeneous architectures, but at least one node in the allocation
was detected to have a different architecture. The detected node was:

  Node: %s

In order to operate in a heterogeneous environment, please reconfigure
Open MPI with --enable-heterogeneous.
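As a sketch of the reconfiguration step this message asks for (the
install prefix and parallelism level below are assumptions, not from
this file; run from the Open MPI source tree):

```shell
# Rebuild Open MPI from source with heterogeneous support enabled.
# --prefix and -j values are illustrative assumptions.
./configure --prefix=/opt/openmpi --enable-heterogeneous
make -j 8
make install
```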
#
[mpi_init:warn-fork]
An MPI process has executed an operation involving a call to the
"fork()" system call to create a child process. Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your MPI job may hang, crash, or produce silent
data corruption. The use of fork() (or system() or other calls that
create child processes) is strongly discouraged.

The process that invoked fork was:

  Local host:          %s (PID %d)
  MPI_COMM_WORLD rank: %d

If you are *absolutely sure* that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.
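#
# The message above tells the user how to silence the warning. As a
# minimal sketch, the parameter can be set through Open MPI's
# OMPI_MCA_ environment-variable convention or on the mpirun command
# line (the application name in the comment is a placeholder):

```shell
# Silence the fork() warning for subsequent runs by setting the
# mpi_warn_on_fork MCA parameter to 0 via the environment.
export OMPI_MCA_mpi_warn_on_fork=0

# Equivalent command-line form ("./my_app" is a placeholder):
#   mpirun --mca mpi_warn_on_fork 0 -np 4 ./my_app
echo "OMPI_MCA_mpi_warn_on_fork=${OMPI_MCA_mpi_warn_on_fork}"
```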