publications, experiment with experience list

Commit 74c5687a80 (parent 3a685bf1a6)
Author: Carl Pearson
Date: 2021-01-27 18:02:45 -07:00
18 changed files with 95 additions and 578 deletions


@@ -1,70 +1,16 @@
+++
title = "SCOPE: C3SR Systems Characterization and Benchmarking Framework"
title = "[tech report] SCOPE: C3SR Systems Characterization and Benchmarking Framework"
date = 2018-09-18
draft = false
# Authors. Comma separated list, e.g. `["Bob Smith", "David Jones"]`.
authors = ["Carl Pearson", "Abdul Dakkak", "Cheng Li", "Sarah Hashash", "Jinjun Xiong", "Wen-Mei Hwu"]
# Publication type.
# Legend:
# 0 = Uncategorized
# 1 = Conference paper
# 2 = Journal article
# 3 = Manuscript
# 4 = Report
# 5 = Book
# 6 = Book section
publication_types = ["4"]
# Publication name and optional abbreviated version.
publication = "arXiv preprint"
publication_short = "arXiv preprint"
# Abstract and optional shortened version.
abstract = "This report presents the design of the Scope infrastructure for extensible and portable benchmarking. Improvements in high-performance computing systems rely on coordination across different levels of system abstraction. Developing and defining accurate performance measurements is necessary at all levels of the system hierarchy, and should be as accessible as possible to developers with different backgrounds. The Scope project aims to lower the barrier to entry for developing performance benchmarks by providing a software architecture that allows benchmarks to be developed independently, by providing useful C/C++ abstractions and utilities, and by providing a Python package for generating publication-quality plots of resulting measurements."
abstract_short = ""
# Does this page contain LaTeX math? (true/false)
math = false
# Does this page require source code highlighting? (true/false)
highlight = false
# Featured image thumbnail (optional)
image_preview = ""
# Is this a selected publication? (true/false)
selected = false
# Projects (optional).
# Associate this publication with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `projects = ["deep-learning"]` references
# `content/project/deep-learning/index.md`.
# Otherwise, set `projects = []`.
projects = ["scope"]
# Links (optional)
url_pdf = "pdf/20180918_pearson_arxiv.pdf"
url_preprint = "https://arxiv.org/abs/1809.08311"
url_code = ""
url_dataset = ""
url_project = ""
url_slides = ""
url_video = ""
url_poster = ""
url_source = ""
# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
[image]
# Caption (optional)
caption = ""
# Focal point (optional)
# Options: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight
focal_point = ""
tags = ["scope"]
+++
-This report presents the design of the Scope infrastructure for extensible and portable benchmarking. Improvements in high-performance computing systems rely on coordination across different levels of system abstraction. Developing and defining accurate performance measurements is necessary at all levels of the system hierarchy, and should be as accessible as possible to developers with different backgrounds. The Scope project aims to lower the barrier to entry for developing performance benchmarks by providing a software architecture that allows benchmarks to be developed independently, by providing useful C/C++ abstractions and utilities, and by providing a Python package for generating publication-quality plots of resulting measurements.
+**Carl Pearson, Abdul Dakkak, Cheng Li, Sarah Hashash, Jinjun Xiong, Wen-Mei Hwu**
+*arXiv preprint*
+This report presents the design of the Scope infrastructure for extensible and portable benchmarking. Improvements in high-performance computing systems rely on coordination across different levels of system abstraction. Developing and defining accurate performance measurements is necessary at all levels of the system hierarchy, and should be as accessible as possible to developers with different backgrounds. The Scope project aims to lower the barrier to entry for developing performance benchmarks by providing a software architecture that allows benchmarks to be developed independently, by providing useful C/C++ abstractions and utilities, and by providing a Python package for generating publication-quality plots of resulting measurements.
+* [pdf](/pdf/20180918_pearson_arxiv.pdf)
+* [preprint](https://arxiv.org/abs/1809.08311)
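For reference, a hypothetical sketch of the trimmed front matter this commit likely produces, assuming only the populated fields shown above survive the deletion of the commented template boilerplate; the exact surviving field set is an assumption, not confirmed by the hunk.

```toml
# Hypothetical reconstruction of the trimmed page; which fields survive is assumed.
+++
title = "[tech report] SCOPE: C3SR Systems Characterization and Benchmarking Framework"
date = 2018-09-18
draft = false
authors = ["Carl Pearson", "Abdul Dakkak", "Cheng Li", "Sarah Hashash", "Jinjun Xiong", "Wen-Mei Hwu"]
publication_types = ["4"]   # 4 = Report, per the legend removed above
publication = "arXiv preprint"
projects = ["scope"]
url_pdf = "pdf/20180918_pearson_arxiv.pdf"
url_preprint = "https://arxiv.org/abs/1809.08311"
tags = ["scope"]
+++
```

Stripping the template comments while keeping publication_types and adding the "[tech report]" title prefix is consistent with the experience-list experiment named in the commit message.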