Questions tagged [openmpi]

Open MPI is an open source implementation of the Message Passing Interface, a library for distributed memory parallel programming.

The Open MPI Project is an open-source implementation of the Message Passing Interface, a standardized and portable message-passing system designed to exploit the computational power of massively parallel, distributed memory computers.

Message passing is one of the most widely used distributed memory programming models, and MPI is the most widely used message-passing API. It offers two types of communication between processes: point-to-point and collective. MPI can run on both distributed and shared memory architectures.

An application using MPI usually consists of multiple simultaneously running processes, normally on different CPUs, which communicate with each other. This type of application is typically programmed using the SPMD (single program, multiple data) model; nevertheless, most MPI implementations also support the MPMD (multiple program, multiple data) model.

More information about the MPI standard may be found on the official MPI Forum website, in the official documentation, and in the Open MPI documentation.

1195 questions
142
votes
5 answers

MPICH vs OpenMPI

Can someone elaborate on the differences between the Open MPI and MPICH implementations of MPI? Which of the two is the better implementation?
lava
  • 1,757
  • 2
  • 13
  • 15
48
votes
8 answers

fatal error: mpi.h: No such file or directory #include <mpi.h>

When I compile my program with only #include <mpi.h>, it tells me that there is no such file or directory. But when I include the full path, #include "/usr/include/mpi/mpi.h" (the path is correct), it returns: In file included from…
user2804865
  • 816
  • 2
  • 9
  • 15
32
votes
4 answers

How do you check the version of OpenMPI?

I'm compiling my code on a server that has OpenMPI, but I need to know which version I'm on so I can read the proper documentation. Is there a constant in mpi.h that I can print to display my current version?
Zak
  • 10,506
  • 15
  • 52
  • 90
29
votes
3 answers

When do I need to use MPI_Barrier()?

I wonder when I need to use a barrier. Do I need one before/after a scatter/gather, for example? Or should OMPI ensure all processes have reached that point before scatter/gather-ing? Similarly, after a broadcast, can I expect all processes to already…
Jiew Meng
  • 74,635
  • 166
  • 442
  • 756
28
votes
2 answers

mpirun - not enough slots available

Usually when I use mpirun, I can "overload" it, using more processes than there actually are processors on my computer. For example, on my four-core Mac, I can run mpirun -np 29 python -c "print 'hey'" no problem. I'm on another machine now, which is…
kilojoules
  • 7,600
  • 17
  • 63
  • 123
21
votes
1 answer

difference between MPI_Send() and MPI_Ssend()?

I know MPI_Send() is a blocking call, which waits until it is safe to modify the application buffer for reuse. To make the send call synchronous (there should be a handshake with the receiver), we need to use MPI_Ssend(). I want to know the…
Ankur Gautam
  • 1,316
  • 4
  • 13
  • 26
19
votes
2 answers

Kubernetes and MPI

I want to run an MPI job on my Kubernetes cluster. The context is that I'm actually running a modern, nicely containerised app but part of the workload is a legacy MPI job which isn't going to be re-written anytime soon, and I'd like to fit it into…
Ben
  • 793
  • 7
  • 19
15
votes
1 answer

MPI_Rank returns the same process number for all processes

I'm trying to run this sample hello world program with Open MPI and mpirun on Debian 7. #include <mpi.h> #include <stdio.h> int main (int argc, char **argv) { int nProcId, nProcNo; int nNameLen; char…
hamedkh
  • 849
  • 2
  • 17
  • 34
14
votes
4 answers

How to determine MPI rank/process number local to a socket/node

Say I run a parallel program using MPI. The execution command mpirun -n 8 -npernode 2 launches 8 processes in total, that is, 2 processes per node on 4 nodes in total (OpenMPI 1.5), where a node comprises 1 CPU (dual core) and network…
ritter
  • 6,405
  • 7
  • 43
  • 79
13
votes
4 answers

Is it possible to send data from a Fortran program to Python using MPI?

I am working on a tool to model wave energy converters, where I need to couple two software packages to each other. One program is written in Fortran, the other one in C++. I need to send information from the Fortran program to the C++ program at…
13
votes
1 answer

Syntax of the --map-by option in openmpi mpirun v1.8

Looking at the following extract from the openmpi manual: --map-by <foo> Map to the specified object, defaults to socket. Supported options include slot, hwthread, core, L1cache, L2cache, L3cache, socket, numa, board, node, sequential,…
el_tenedor
  • 604
  • 1
  • 8
  • 19
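For reference on the option quoted above, a couple of hedged examples of the `--map-by` syntax available since Open MPI 1.8 (the `./app` binary is a placeholder); the `ppr:N:object` form places N processes per resource object:

```shell
# Place 2 ranks on each socket, binding each rank to a core:
mpirun -np 8 --map-by ppr:2:socket --bind-to core ./app

# Print where each rank ended up, to verify the mapping:
mpirun -np 8 --map-by ppr:2:socket --report-bindings ./app
```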
13
votes
7 answers

DLL load failed: The specified module could not be found when doing "from mpi4py import MPI"

I am trying to use mpi4py 1.3 with Python 2.7 on Windows 7 64-bit. I downloaded the installable version from here, which includes OpenMPI 1.6.3, so in the installed directory (*/Python27\Lib\site-packages\mpi4py\lib) the following libraries exist:…
Aso Agile
  • 337
  • 1
  • 5
  • 13
13
votes
3 answers

Having Open MPI related issues while making CUDA 5.0 samples (Mac OS X ML)

When I try to make the CUDA 5.0 samples, an error appears: Makefile:79: * MPI not found, not building simpleMPI.. Stop. I've tried to download and build the latest version of Open MPI, referring to the Open MPI "FAQ / Platforms / OS X / 6. How do I…
Geradlus_RU
  • 1,337
  • 1
  • 17
  • 33
12
votes
2 answers

Open MPI - mpirun exits with error on simple program

I have recently installed Open MPI on my computer, and when I try to run a simple Hello World program, it exits with the following error: ------------------------------------------------------- Primary job terminated normally, but 1 process returned a…
fenusa0
  • 136
  • 1
  • 1
  • 5
11
votes
1 answer

fault tolerance in MPICH/OpenMPI

I have two questions. Q1. Is there a more efficient way to handle error situations in MPI, other than checkpoint/rollback? I see that if a node "dies", the program halts abruptly. Is there any way to go ahead with the execution after a node…
Param
  • 197
  • 1
  • 7