jpekkila
d0ca1f8195
Reduction types are now generated with acc instead of being explicitly declared in astaroth.h
2020-06-28 18:16:19 +03:00
jpekkila
852fae17cf
Added a function for getting the GPU count from Fortran
2020-06-28 18:15:40 +03:00
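The C side of such a query can be a plain function with C linkage that Fortran binds to via ISO_C_BINDING. A minimal sketch, assuming the CUDA runtime; the name ac_get_num_devices is hypothetical, not the actual Astaroth symbol:

    /* Illustrative sketch: a C function Fortran can bind to via
     * ISO_C_BINDING. The name is hypothetical. */
    #include <cuda_runtime_api.h>

    int
    ac_get_num_devices(void)
    {
        int count = 0;
        /* Report zero devices on failure instead of a negative error code. */
        if (cudaGetDeviceCount(&count) != cudaSuccess)
            return 0;
        return count;
    }

On the Fortran side, the matching declaration would be along the lines of integer(c_int) function ac_get_num_devices() bind(C).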
jpekkila
50fb54f1aa
Added more warnings, since it's easy to make off-by-one mistakes when dealing with Fortran-C interop
2020-06-28 18:14:54 +03:00
jpekkila
e764725564
acUpdateBuiltinParams now recalculates AC_inv_dsx and others if necessary
2020-06-26 09:54:17 +03:00
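AC_inv_dsx is presumably the cached reciprocal of the grid spacing dsx, kept so kernels can multiply instead of divide. A hedged sketch of recalculating such derived parameters; the struct and field names below are stand-ins, not the real Astaroth types:

    /* Sketch only: recompute derived parameters from the primary grid
     * spacings. AC_inv_dsx is assumed to be 1/dsx. */
    #include <assert.h>

    typedef double AcReal;

    typedef struct {
        AcReal dsx, dsy, dsz;             /* grid spacings */
        AcReal inv_dsx, inv_dsy, inv_dsz; /* cached reciprocals */
    } MeshParams;

    static void
    update_builtin_params(MeshParams* p)
    {
        assert(p->dsx > 0 && p->dsy > 0 && p->dsz > 0);
        p->inv_dsx = (AcReal)1.0 / p->dsx;
        p->inv_dsy = (AcReal)1.0 / p->dsy;
        p->inv_dsz = (AcReal)1.0 / p->dsz;
    }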
jpekkila
6f59890a3f
Added loading and storing functions to the Fortran interface
2020-06-26 09:52:33 +03:00
jpekkila
ee4b18c81c
Merge branch 'mpi-to-master-merge-candidate-2020-06-01' of https://bitbucket.org/jpekkila/astaroth into mpi-to-master-merge-candidate-2020-06-01
2020-06-25 20:40:24 +03:00
jpekkila
39c7fc6c6f
Streams are now generated with acc
2020-06-25 20:40:02 +03:00
jpekkila
7e71e32359
Fortran does not seem to really support arrays of pointers; better to modify the interface function to take the f array as input and use it in C to construct a proper AcMesh
2020-06-25 20:21:16 +03:00
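The workaround described here amounts to passing one flat array from Fortran and slicing it into per-field pointers on the C side. An illustrative sketch, assuming a field-major layout; NUM_FIELDS, AcReal, and the Mesh struct are stand-ins for the real Astaroth types:

    /* Sketch: accept one flat Fortran array (all vertex buffers stored
     * contiguously, field-major) and slice it on the C side. */
    #include <stddef.h>

    typedef double AcReal;
    enum { NUM_FIELDS = 8 }; /* stand-in for the real field count */

    typedef struct {
        AcReal* vertex_buffer[NUM_FIELDS];
    } Mesh;

    void
    mesh_from_fortran(AcReal* farray, const size_t mxyz, Mesh* mesh)
    {
        /* Field i starts at offset i * mxyz, where mxyz is the cell count
         * per field. Fortran passes the flat array by reference, so no
         * array of pointers is needed on the Fortran side. */
        for (size_t i = 0; i < NUM_FIELDS; ++i)
            mesh->vertex_buffer[i] = farray + i * mxyz;
    }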
jpekkila
1b50374cdb
Added the rest of the basic functions required for running simulations with the Fortran interface
2020-06-25 20:09:35 +03:00
jpekkila
0a19192004
Auto-optimization was not enabled for all GPUs when using MPI. We may have to rerun all benchmarks for the MPI paper.
2020-06-25 19:53:39 +03:00
jpekkila
225c660e0d
Merge branch 'mpi-to-master-merge-candidate-2020-06-01' of https://bitbucket.org/jpekkila/astaroth into mpi-to-master-merge-candidate-2020-06-01
2020-06-25 06:44:54 +03:00
jpekkila
172ffc34dc
Added another Fortran file that was missing
2020-06-25 06:44:27 +03:00
jpekkila
264abddefb
bitbucket-pipelines.yml edited online with Bitbucket
2020-06-25 03:41:23 +00:00
jpekkila
f11c5b84fb
Forgot the actual interface in the previous commits; here it is
2020-06-25 06:36:00 +03:00
jpekkila
c44c3d02b4
Added a sample for testing the Fortran interface
2020-06-25 06:35:13 +03:00
jpekkila
fbb8d7c7c6
Added a minimal Fortran interface to Astaroth
2020-06-25 06:34:16 +03:00
jpekkila
70ecacee7c
Reverted the default build options to what they were before merging (again)
2020-06-24 17:04:35 +03:00
jpekkila
196edac46d
Added proper casts to modelsolver.c
2020-06-24 17:03:54 +03:00
jpekkila
c0c337610b
Added mpi_reduce_bench to samples
2020-06-24 16:42:39 +03:00
jpekkila
fab620eb0d
Reordered the reduction autotests and made the model and the candidates use the exact same mesh, instead of the unclean integrated one
2020-06-24 16:34:50 +03:00
jpekkila
ba0bfd65b4
Merged the new reduction functions manually
2020-06-24 16:10:27 +03:00
jpekkila
ff1a601f85
Merged mpi-to-master-merge-candidate-2020-06-01 here
2020-06-24 16:08:14 +03:00
jpekkila
0d1c5b3911
Autoformatted
2020-06-24 15:56:30 +03:00
jpekkila
3c3b2a1885
Reverted the default settings to what they were before the merge. Note: LFORCING (1) is potentially not tested properly, TODO: recheck.
2020-06-24 15:35:19 +03:00
jpekkila
88f99c12e4
Fixed #fi -> #endif
2020-06-24 15:20:43 +03:00
jpekkila
f04e347c45
Cleanup before merging to the master merge candidate branch
2020-06-24 15:13:15 +03:00
jpekkila
0e4b39d6d7
Added a toggle for using pinned memory
2020-06-11 11:28:52 +03:00
Oskar Lappi
0030db01f3
Automatic calculation of the number of nodes based on the number of processes
2020-06-10 16:51:35 +03:00
jpekkila
1cdb9e2ce7
Added missing synchronization to the end of the new integration function
2020-06-10 12:32:56 +03:00
jpekkila
fa422cf457
Added a better-pipelined version of acGridIntegrate and a switch for toggling the transfer of corners
2020-06-10 02:16:23 +03:00
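The pipelining idea is to post the halo exchange with nonblocking MPI calls, integrate the interior cells that need no remote data while the messages are in flight, then wait and finish the boundary. A conceptual 1-D sketch, not the actual acGridIntegrate (the corner-transfer toggle only matters in 2-D/3-D and is not shown); the kernel stubs are illustrative:

    /* Conceptual sketch only: overlap halo exchange with interior work.
     * Layout of f: [0, NGHOST) left ghosts, [NGHOST, NGHOST + n) interior,
     * [NGHOST + n, NGHOST + n + NGHOST) right ghosts. */
    #include <mpi.h>

    #define NGHOST 3 /* stencil radius */

    static void integrate_interior(double* f, int n) { (void)f; (void)n; /* kernel stub */ }
    static void integrate_boundary(double* f, int n) { (void)f; (void)n; /* kernel stub */ }

    void
    pipelined_step(double* f, int n, int left, int right, MPI_Comm comm)
    {
        MPI_Request reqs[4];
        /* Receive halos into the ghost zones; send our outermost interior cells. */
        MPI_Irecv(f,              NGHOST, MPI_DOUBLE, left,  0, comm, &reqs[0]);
        MPI_Irecv(f + n + NGHOST, NGHOST, MPI_DOUBLE, right, 1, comm, &reqs[1]);
        MPI_Isend(f + NGHOST,     NGHOST, MPI_DOUBLE, left,  1, comm, &reqs[2]);
        MPI_Isend(f + n,          NGHOST, MPI_DOUBLE, right, 0, comm, &reqs[3]);

        integrate_interior(f, n); /* overlaps with the communication above */

        MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
        integrate_boundary(f, n); /* needs the freshly received ghost zones */
    }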
Oskar Lappi
c7f23eb50c
Added partition argument to mpibench script
2020-06-09 14:07:37 +03:00
jpekkila
9840b817d0
Added the (hopefully final) basic test case used for the benchmarks
2020-06-07 21:59:33 +03:00
Oskar Lappi
cd49db68d7
Benchmark without barriers
2020-06-07 15:50:49 +03:00
Oskar Lappi
53b48bb8ce
MPI_Allreduce -> MPI_Reduce for MPI reductions + benchmark batch script
...
Slightly ugly, because this changes the benchmark behaviour slightly.
However, we now have a way to run batch benchmarks from one script; no need to generate new ones.
2020-06-06 22:56:05 +03:00
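The practical difference: MPI_Allreduce leaves the result on every rank, while MPI_Reduce leaves it only on the root, which is cheaper when a single rank needs the value, e.g. for writing results to a file. A minimal sketch of the switch:

    #include <mpi.h>

    double
    reduce_max(double local, MPI_Comm comm)
    {
        double global = 0.0;
        /* Before: every rank received the result.
         * MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_MAX, comm); */

        /* After: only rank 0 holds a valid result. */
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_MAX, 0, comm);
        return global; /* meaningful on rank 0 only */
    }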
Oskar Lappi
eb05e02793
Added vector reductions to the MPI reduction benchmarks
2020-06-06 19:25:30 +03:00
Oskar Lappi
666f01a23d
Benchmarking program for scalar MPI reductions, and a non-batch script for running benchmarks
...
- New program mpi_reduce_bench
  - runs test cases defined in the source
  - writes all benchmark results to a CSV file, tagging the test case and the benchmark run
  - takes an optional argument for the benchmark tag; the default tag is a timestamp
- New script mpibench.sh
  - runs mpi_reduce_bench with the defined parameters:
    - number of tasks
    - number of nodes
    - the benchmark tag for mpi_reduce_bench; the default tag is the current git HEAD short hash
2020-06-05 19:48:40 +03:00
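A hedged sketch of what such a benchmark loop might look like per test case: time the reduction and have rank 0 append a tagged row to a CSV file. The file name, columns, and iteration count are illustrative, not taken from mpi_reduce_bench:

    #include <mpi.h>
    #include <stdio.h>

    void
    bench_scal_reduce(const char* tag, const char* testcase, MPI_Comm comm)
    {
        int rank, nprocs;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &nprocs);

        double local = (double)rank, global = 0.0;
        MPI_Barrier(comm); /* align ranks before timing */
        const double t0 = MPI_Wtime();
        for (int i = 0; i < 100; ++i)
            MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, comm);
        const double dt = (MPI_Wtime() - t0) / 100.0;

        if (rank == 0) {
            FILE* fp = fopen("mpi_reduce_bench.csv", "a");
            if (fp) {
                fprintf(fp, "%s,%s,%d,%.9f\n", tag, testcase, nprocs, dt);
                fclose(fp);
            }
        }
    }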
jpekkila
17a4f31451
Added the latest setup used for benchmarks
2020-06-04 20:47:03 +03:00
Oskar Lappi
9e5fd40838
Changes after code review by Johannes, and clang-format
2020-06-04 18:50:22 +03:00
Oskar Lappi
f7d8de75d2
Reduction test pipeline added to mpitest; Error struct changed: new label field
...
- CHANGED: the Error struct has a new label field for labeling an error
  - The label is what is printed to the screen
  - The vtxbuf name lookup was moved out of printErrorToScreen/print_error_to_screen
- NEW: acScalReductionTestCase and acVecReductionTestCase
  - Define new test cases by adding them to the list in samples/mpitest/main.cc:main
- Minor style change in verification.c to make all Verification functions similar and fit on one screen
2020-06-04 15:10:35 +03:00
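A hedged sketch of the described Error struct change: the caller does the vtxbuf name lookup and fills in the label, so the print routine only formats what it is given. Field names and ERROR_LABEL_LEN are illustrative:

    #include <stdio.h>
    #include <string.h>

    #define ERROR_LABEL_LEN 32

    typedef struct {
        char   label[ERROR_LABEL_LEN]; /* what gets printed to screen */
        double maximum_error;
        double ulp_error;
    } Error;

    static void
    print_error_to_screen(const Error* e)
    {
        printf("%-16s max abs err %g, ulp err %g\n", e->label,
               e->maximum_error, e->ulp_error);
    }

    /* Usage: the caller does the name lookup and labels the error. */
    static void
    report(const char* vtxbuf_name, Error e)
    {
        strncpy(e.label, vtxbuf_name, ERROR_LABEL_LEN - 1);
        e.label[ERROR_LABEL_LEN - 1] = '\0';
        print_error_to_screen(&e);
    }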
jpekkila
226de32651
Added model solution for reductions and functions for automated testing
2020-06-03 13:37:00 +03:00
Oskar Lappi
34793d4e8b
Changes after code review with Johannes
2020-06-03 12:44:43 +03:00
Oskar Lappi
899d679518
Draft of MPI-based reductions acGridReduceScal, acGridReduceVec
...
- Calls acDeviceReduceScal/Vec first
- Both functions then perform the same MPI reduction (MPI_Allreduce)
- Not tested
2020-06-02 21:59:30 +03:00
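Per the commit body, this is a two-stage reduction: each rank first reduces its local device data, then the ranks combine partial results with MPI_Allreduce. A minimal sketch with MAX as the example operation; device_reduce_max is a stand-in for acDeviceReduceScal:

    #include <mpi.h>

    /* Stand-in for acDeviceReduceScal: reduce the local device portion. */
    static double
    device_reduce_max(void)
    {
        return 0.0; /* stub */
    }

    double
    grid_reduce_max(MPI_Comm comm)
    {
        double local  = device_reduce_max();
        double global = 0.0;
        /* Stage two: combine the per-rank partial results. The MPI op must
         * match the local reduction (MAX here). */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_MAX, comm);
        return global; /* identical on every rank */
    }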
jpekkila
0d80834619
Disabled forcing and upwinding for performance tests. Set the default grid size to 512^3. Set default CMake parameters such that benchmarks can be reproduced out of the box.
2020-06-02 14:09:00 +03:00
jpekkila
a753ca92f2
Made CMake handle MPI linking. Potentially a bad idea (it is usually better to use the mpicc and mpicxx wrappers)
2020-05-30 22:02:39 +03:00
jpekkila
f97ed9e513
For reason X, git decided to remove integration from the most critical part of the program when merging. Luckily, we have autotests.
2020-05-30 20:59:39 +03:00
jpekkila
9cafe88d13
Merge branch 'mpi-to-master-merge-candidate-2020-06-01' of https://bitbucket.org/jpekkila/astaroth into mpi-to-master-merge-candidate-2020-06-01
2020-05-30 20:25:48 +03:00
jpekkila
176ceae313
Fixed various compilation warnings
2020-05-30 20:23:53 +03:00
jpekkila
2ddeef22ac
bitbucket-pipelines.yml edited online with Bitbucket
2020-05-30 16:58:45 +00:00
jpekkila
f929b21ac0
bitbucket-pipelines.yml edited online with Bitbucket
2020-05-30 16:52:26 +00:00
jpekkila
95275df3f2
bitbucket-pipelines.yml edited online with Bitbucket
2020-05-30 16:48:39 +00:00