Commit Graph

9 Commits

Author SHA1 Message Date
jpekkila 46cfa9cd37 Now using the MPI C bindings instead of the C++ bindings (deprecated since MPI-2.2) due to compilation issues on some machines (error: cast between incompatible function types, ompi_mpi_cxx_op_intercept) 2020-08-19 15:50:16 +03:00
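For context, a minimal sketch of what the switch looks like: the C API calls below replace their `MPI::` C++-binding equivalents (noted in the comments), and using the built-in `MPI_Op` constants such as `MPI_SUM` sidesteps the C++ operator wrapper behind the ompi_mpi_cxx_op_intercept cast error. An illustrative example, not code from this commit.

```c
/* Sketch: MPI C bindings replacing the deprecated C++ bindings. */
#include <mpi.h>
#include <stdio.h>

int
main(int argc, char** argv)
{
    MPI_Init(&argc, &argv); /* C++ bindings: MPI::Init(argc, argv) */

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* MPI::COMM_WORLD.Get_rank() */
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs); /* MPI::COMM_WORLD.Get_size() */

    double local = (double)rank, sum = 0.0;
    /* The built-in MPI_SUM avoids the C++ MPI::Op wrapper entirely. */
    MPI_Allreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("rank %d/%d: sum = %g\n", rank, nprocs, sum);

    MPI_Finalize();
    return 0;
}
```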
jpekkila 0d1c5b3911 Autoformatted 2020-06-24 15:56:30 +03:00
jpekkila f04e347c45 Cleanup before merging to the master merge candidate branch 2020-06-24 15:13:15 +03:00
jpekkila 176ceae313 Fixed various compilation warnings 2020-05-30 20:23:53 +03:00
jpekkila 9cd5909f5a BWtest now calculates aggregate bandwidths per process instead of assuming that all neighbor communication can be done in parallel (within a node one can have parallel P2P connections to all neighbors and an enormous total bandwidth, but this is not the case over the network, where we seem to have only one bidirectional socket) 2020-04-09 20:28:04 +03:00
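The distinction the message draws could be sketched like this: time all of a process's neighbor transfers in one interval and divide the total bytes moved by that interval, rather than crediting each link with the full elapsed time as if they all ran in parallel. The ring topology, message size, and neighbor count below are illustrative stand-ins, not BWtest's actual setup.

```c
/* Sketch: aggregate per-process bandwidth over all neighbor transfers. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int bytes = 64 * 1024 * 1024; /* illustrative message size */
    char* sendbuf   = malloc(bytes);
    char* recv_prev = malloc(bytes);
    char* recv_next = malloc(bytes);
    memset(sendbuf, 1, bytes);

    /* Two neighbors on a ring; a real halo exchange has more. */
    const int next = (rank + 1) % nprocs;
    const int prev = (rank + nprocs - 1) % nprocs;

    MPI_Barrier(MPI_COMM_WORLD);
    const double start = MPI_Wtime();

    MPI_Request reqs[4];
    MPI_Irecv(recv_prev, bytes, MPI_BYTE, prev, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(recv_next, bytes, MPI_BYTE, next, 1, MPI_COMM_WORLD, &reqs[1]);
    MPI_Isend(sendbuf, bytes, MPI_BYTE, next, 0, MPI_COMM_WORLD, &reqs[2]);
    MPI_Isend(sendbuf, bytes, MPI_BYTE, prev, 1, MPI_COMM_WORLD, &reqs[3]);
    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);

    const double elapsed = MPI_Wtime() - start;

    /* Aggregate view: every byte this process moved (2 sends + 2 recvs),
       over one shared wall-clock interval -- no assumption that the
       individual links ran in parallel at full speed. */
    const double total_gib = 4.0 * bytes / (1024.0 * 1024.0 * 1024.0);
    printf("rank %d: %.2f GiB/s aggregate\n", rank, total_gib / elapsed);

    free(sendbuf);
    free(recv_prev);
    free(recv_next);
    MPI_Finalize();
    return 0;
}
```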
jpekkila d4a84fb887 Added a PCIe bandwidth test 2020-04-09 20:04:54 +03:00
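A minimal sketch of such a probe with the CUDA runtime API: time repeated host-to-device copies from pinned memory and report GiB/s. The transfer size and iteration count are illustrative, not necessarily what the commit uses.

```c
/* Sketch: host-to-device PCIe bandwidth measurement with CUDA events. */
#include <cuda_runtime.h>
#include <stdio.h>

int
main(void)
{
    const size_t bytes = 256ull * 1024 * 1024; /* illustrative size */
    const int iters    = 10;

    char* host;
    char* device;
    cudaMallocHost((void**)&host, bytes); /* pinned memory for peak PCIe rate */
    cudaMalloc((void**)&device, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i)
        cudaMemcpy(device, host, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms;
    cudaEventElapsedTime(&ms, start, stop);

    const double gib = (double)bytes * iters / (1024.0 * 1024.0 * 1024.0);
    printf("H2D: %.2f GiB/s\n", gib / (ms / 1e3));

    cudaFree(device);
    cudaFreeHost(host);
    return 0;
}
```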
jpekkila fb41741d74 Improvements to samples 2020-04-07 17:58:47 +03:00
jpekkila cc9d3f1b9c Found a workaround that gives good inter- and intra-node performance. The HPC-X MPI implementation does not know how to do P2P comm with pinned arrays (should be 80 GiB/s, measured 10 GiB/s), while internode comm is very slow without pinned arrays (should be 40 GiB/s, measured < 1 GiB/s). Made a proof-of-concept communicator that pins arrays that are sent to or received from another node. 2020-04-05 20:15:32 +03:00
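The workaround might take roughly the following shape: page-lock (cudaHostRegister) only the buffers exchanged with off-node ranks, detecting node locality via an MPI_COMM_TYPE_SHARED sub-communicator. The helper on_different_node and all parameters are hypothetical, not the commit's actual communicator code.

```c
/* Sketch: pin only buffers that cross the network, leave intra-node
   P2P buffers unpinned. */
#include <cuda_runtime.h>
#include <mpi.h>
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical helper: true if `peer_world_rank` lives on another node,
   i.e. it is not a member of our shared-memory (intra-node) communicator. */
static bool
on_different_node(MPI_Comm node_comm, int peer_world_rank)
{
    MPI_Group world_group, node_group;
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    MPI_Comm_group(node_comm, &node_group);

    int node_rank;
    MPI_Group_translate_ranks(world_group, 1, &peer_world_rank, node_group,
                              &node_rank);

    MPI_Group_free(&world_group);
    MPI_Group_free(&node_group);
    return node_rank == MPI_UNDEFINED;
}

int
main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    /* Sub-communicator of ranks sharing this node. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    const size_t bytes = 16ull * 1024 * 1024; /* illustrative size */
    char* buf = malloc(bytes);

    const int peer = 0; /* illustrative peer rank */
    bool pinned = false;
    if (on_different_node(node_comm, peer)) {
        /* Page-lock only buffers that go over the network. */
        cudaHostRegister(buf, bytes, cudaHostRegisterDefault);
        pinned = true;
    }

    /* ... MPI_Isend/MPI_Irecv with buf as usual ... */

    if (pinned)
        cudaHostUnregister(buf);
    free(buf);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}
```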
jpekkila 88e53dfa21 Added a little program for testing the bandwidths of different MPI comm styles on n nodes and processes 2020-04-05 17:09:57 +03:00
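A toy version of such a test: a two-rank blocking ping-pong that reports end-to-end bandwidth. The actual sample presumably sweeps message sizes and comm styles; the sizes and iteration counts below are illustrative.

```c
/* Sketch: blocking MPI_Send/MPI_Recv ping-pong bandwidth probe. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int bytes = 32 * 1024 * 1024; /* illustrative message size */
    const int iters = 20;
    char* buf = calloc(bytes, 1);

    MPI_Barrier(MPI_COMM_WORLD);
    const double start = MPI_Wtime();

    for (int i = 0; i < iters; ++i) {
        if (rank == 0) {
            MPI_Send(buf, bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
        else if (rank == 1) {
            MPI_Recv(buf, bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }

    const double elapsed = MPI_Wtime() - start;
    if (rank == 0) {
        /* Two transfers per iteration: there and back. */
        const double gib = 2.0 * iters * (double)bytes
                           / (1024.0 * 1024.0 * 1024.0);
        printf("ping-pong bandwidth: %.2f GiB/s\n", gib / elapsed);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```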