add some bibtex
@@ -19,6 +19,19 @@ Bespoke codes with optimized communication may be non-portable across run-time/s
This work presents node-aware approaches for automatic data placement and communication implementation for 3D stencil codes on multi-GPU nodes with non-homogeneous communication performance and capabilities.
Benchmarking results on the Summit system show that placement choices can yield a 20% improvement in single-node exchange, and that communication specialization can yield a further 6x improvement in single-node exchange time and a 16% improvement at 1536 GPUs.

```bibtex
@INPROCEEDINGS{9150372,
  author={C. {Pearson} and M. {Hidayetoğlu} and M. {Almasri} and O. {Anjum} and I. {Chung} and J. {Xiong} and W. W. {Hwu}},
  booktitle={2020 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)},
  title={Node-Aware Stencil Communication for Heterogeneous Supercomputers},
  year={2020},
  volume={},
  number={},
  pages={796-805},
  doi={10.1109/IPDPSW50202.2020.00136}
}
```
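
For reference, a minimal sketch of how this entry could be cited from LaTeX, assuming the entry above is saved in a hypothetical `refs.bib` next to the `.tex` source:

```latex
% Minimal usage sketch. Assumes the BibTeX entry above is stored in a
% file named refs.bib (the filename is illustrative, not prescribed).
\documentclass{article}
\begin{document}
Node-aware stencil communication is described in~\cite{9150372}.
% IEEEtran is the usual style for IEEE venues; plain also works.
\bibliographystyle{IEEEtran}
\bibliography{refs}
\end{document}
```
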
* [pdf](/pdf/20200522_pearson_iwapt.pdf)
* [code](https://github.com/cwpearson/stencil)
* [slides](/pdf/20200522_pearson_iwapt_slides.pdf)

@@ -19,6 +19,19 @@ Results for the challenge benchmarks show that the proposed kernel design and mu
These results are up to 4.3x faster for a single GPU and an order of magnitude faster at full scale than those of the champion of the 2019 Sparse Deep Neural Network Graph Challenge for the same generation of NVIDIA V100 GPUs.
Using the same implementation, we also show that single-GPU throughput on NVIDIA A100 is 2.37x faster than on V100.

```bibtex
@INPROCEEDINGS{9286206,
  author={M. {Hidayetoğlu} and C. {Pearson} and V. S. {Mailthody} and E. {Ebrahimi} and J. {Xiong} and R. {Nagi} and W. -m. {Hwu}},
  booktitle={2020 IEEE High Performance Extreme Computing Conference (HPEC)},
  title={At-Scale Sparse Deep Neural Network Inference With Efficient GPU Implementation},
  year={2020},
  volume={},
  number={},
  pages={1-7},
  doi={10.1109/HPEC43674.2020.9286206}
}
```
* [pdf](/pdf/20200923_hidayetoglu_hpec.pdf)
* [code](https://github.com/merthidayetoglu/sparse-DNN)
* [slides](/pdf/20200923_hidayetoglu_hpec_slides.pdf)