Update README.md
bernhardmgruber committed Nov 1, 2022
1 parent b5b594b commit 1d8adeb
Showing 1 changed file with 9 additions and 13 deletions.

LLAMA – Low-Level Abstraction of Memory Access
==============================================

![LLAMA](docs/images/logo_400x169.png)

LLAMA is a cross-platform C++17 header-only template library for the abstraction of memory
access patterns. It distinguishes between the view of the algorithm on
the memory and the real layout in the background. This enables performance
portability for multicore, manycore and GPU applications with the very same code.
The memory layout can follow striding or any other run time or compile time access pattern.

To achieve this goal, LLAMA is split into mostly independent, orthogonal parts
completely written in modern C++17 to run on as many architectures and with as
many compilers as possible, while still supporting extensions needed, e.g., to run
on GPU or other many-core hardware.
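
As a rough sketch of what this separation looks like in code (the exact type and
function names, such as `llama::Record`, `llama::ArrayExtentsDynamic`,
`llama::mapping::SoA` and `llama::allocView`, vary between LLAMA versions, so
please check the documentation below for the current API):

```cpp
// Illustrative sketch only; names may differ between LLAMA versions.
#include <llama/llama.hpp>

#include <cstddef>

// Tag types naming the fields of the record dimension:
struct X{};
struct Y{};
struct Z{};

// The record dimension describes *what* a record holds, not how it is stored.
using Vec3 = llama::Record<
    llama::Field<X, float>,
    llama::Field<Y, float>,
    llama::Field<Z, float>>;

int main()
{
    // One-dimensional array of 1024 records; the extent is a run time value here.
    const auto extents = llama::ArrayExtentsDynamic<std::size_t, 1>{1024};

    // The mapping decides the physical layout in memory.
    const auto mapping = llama::mapping::SoA<decltype(extents), Vec3>{extents};

    // The view is the algorithm's handle onto the data, independent of the layout.
    auto view = llama::allocView(mapping);

    for(std::size_t i = 0; i < 1024; ++i)
    {
        view(i)(X{}) = 1.0f;
        view(i)(Y{}) = 2.0f;
        view(i)(Z{}) = view(i)(X{}) + view(i)(Y{});
    }
}
```

Swapping the SoA mapping for an AoS or a custom padded/strided mapping changes
only the line constructing the mapping; the loop accessing the view stays untouched.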

Documentation
-------------

The user documentation is available on [Read the Docs](https://llama-doc.rtfd.io).
It includes:

* Installation instructions
* Motivation and goals
* Overview of concepts and ideas
* Descriptions of LLAMA's constructs

API documentation is generated by Doxygen from the C++ source and uploaded to the project's [GitHub Pages site](https://alpaka-group.github.io/llama/).
We published a paper on LLAMA in the [Wiley Online Library](https://doi.org/10.1002/spe.3077).
We gave a talk on LLAMA at CERN's Compute Accelerator Forum on 2021-05-12.
The video recording (starting at 40:00) and slides are available on [CERN's Indico](https://indico.cern.ch/event/975010/).

Supported compilers
-------------------
Attribution
-----------

If you use LLAMA for scientific work, please consider citing this project.
We upload all releases to [Zenodo](https://zenodo.org/record/4911494),
where you can export a citation in your preferred format.
We provide a DOI for each release of LLAMA.
Additionally, consider citing the [LLAMA paper](https://doi.org/10.1002/spe.3077).

License
-------