From 1d8adebeb00ee8f6d66d4738c91d1155ad146471 Mon Sep 17 00:00:00 2001
From: Bernhard Manfred Gruber
Date: Tue, 1 Nov 2022 17:39:32 +0100
Subject: [PATCH] Update README.md

---
 README.md | 22 +++++++++-------------
 1 file changed, 9 insertions(+), 13 deletions(-)

diff --git a/README.md b/README.md
index 647d720c08..91c7007b16 100644
--- a/README.md
+++ b/README.md
@@ -11,7 +11,7 @@ LLAMA – Low-Level Abstraction of Memory Access
 
 ![LLAMA](docs/images/logo_400x169.png)
 
-LLAMA is a cross-platform C++17 template header-only library for the abstraction of memory
+LLAMA is a cross-platform C++17 header-only template library for the abstraction of memory
 access patterns. It distinguishes between the view of the algorithm on the
 memory and the real layout in the background. This enables performance
 portability for multicore, manycore and gpu applications with the very same code.
@@ -24,13 +24,12 @@ striding or any other run time or compile time access pattern.
 To achieve this goal LLAMA is split into mostly independent, orthogonal parts
 completely written in modern C++17 to run on as many architectures and with as
 many compilers as possible while still supporting extensions needed e.g. to run
-on GPU or other many core hardware.
+on GPU or other many-core hardware.
 
 Documentation
 -------------
 
-The user documentation can be found here:
-https://llama-doc.rtfd.io.
+The user documentation is available on [Read the Docs](https://llama-doc.rtfd.io).
 It includes:
 
 * Installation instructions
@@ -38,15 +37,10 @@ It includes:
 * Overview of concepts and ideas
 * Descriptions of LLAMA's constructs
 
-Doxygen generated API documentation is located here:
-https://alpaka-group.github.io/llama/.
-
-We submitted a scientific preprint on LLAMA to arXiv here:
-https://arxiv.org/abs/2106.04284.
-
+API documentation is generated by Doxygen from the C++ source and is uploaded to our [GitHub page](https://alpaka-group.github.io/llama/).
+We published a paper on LLAMA in the [Wiley Online Library](https://doi.org/10.1002/spe.3077).
 We gave a talk on LLAMA at CERN's Compute Accelerator Forum on 2021-05-12.
-The video recording (starting at 40:00) and slides are available here:
-https://indico.cern.ch/event/975010/.
+The video recording (starting at 40:00) and slides are available on [CERN's Indico](https://indico.cern.ch/event/975010/).
 
 Supported compilers
 -------------------
@@ -81,8 +75,10 @@ Attribution
 -----------
 
 If you use LLAMA for scientific work, please consider citing this project.
-We upload all releases to [zenodo](https://zenodo.org/record/4911494), where you can export a citation in your preferred format.
+We upload all releases to [Zenodo](https://zenodo.org/record/4911494),
+where you can export a citation in your preferred format.
 We provide a DOI for each release of LLAMA.
+Additionally, consider citing the [LLAMA paper](https://doi.org/10.1002/spe.3077).
 
 License
 -------
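
For context on the separation between the algorithm's view and the memory layout that the README describes, the following is a minimal illustrative sketch. It is modeled loosely on LLAMA's documented examples; the names `llama::Record`, `llama::Field`, `llama::ArrayDims`, `llama::mapping::SoA`/`AoS`, and `llama::allocView` are assumed from the project's documentation and their exact spellings and signatures vary between LLAMA releases, so treat this as a sketch rather than a definitive usage example.

```cpp
// Illustrative sketch only: shows how LLAMA separates the algorithm's view
// from the chosen memory layout. API names are assumptions based on LLAMA's
// documentation and may differ between releases.
#include <llama/llama.hpp>

#include <cstddef>
#include <iostream>

// Empty tag types name the fields of the record dimension.
struct X{};
struct Y{};
struct Z{};
struct Mass{};

// The record dimension describes the element "struct" independently of how
// it is laid out in memory.
using Particle = llama::Record<
    llama::Field<X, float>,
    llama::Field<Y, float>,
    llama::Field<Z, float>,
    llama::Field<Mass, float>>;

int main()
{
    constexpr std::size_t n = 1024;
    const auto arrayDims = llama::ArrayDims<1>{n};

    // The mapping chooses the memory layout. Swapping SoA for AoS (or a
    // blocked/strided mapping) changes the layout without touching the loop below.
    const auto mapping = llama::mapping::SoA<llama::ArrayDims<1>, Particle>{arrayDims};
    // const auto mapping = llama::mapping::AoS<llama::ArrayDims<1>, Particle>{arrayDims};

    // The view owns the memory and resolves tag-based accesses through the mapping.
    auto view = llama::allocView(mapping);

    // The algorithm addresses elements like a native array of structs,
    // regardless of which layout the mapping selected.
    for(std::size_t i = 0; i < n; i++)
    {
        view(i)(X{}) = 1.0f;
        view(i)(Y{}) = 2.0f;
        view(i)(Z{}) = 3.0f;
        view(i)(Mass{}) = 1.0f;
    }

    std::cout << "mass of particle 0: " << view(0)(Mass{}) << '\n';
}
```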