diff --git a/docs/images/mapping.svg b/docs/images/mapping.svg
index 2b4fd63b80..e3c5919c6d 100644
--- a/docs/images/mapping.svg
+++ b/docs/images/mapping.svg
@@ -1,3 +1,3 @@
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/docs/images/overview.svg b/docs/images/overview.svg
index fca97bdff4..235810f326 100644
--- a/docs/images/overview.svg
+++ b/docs/images/overview.svg
@@ -1,3 +1,3 @@
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/docs/index.rst b/docs/index.rst
index 27b092973b..28a3b6c071 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -31,7 +31,7 @@ LLAMA is licensed under the LGPL3+.
pages/install
pages/introduction
- pages/domains
+ pages/dimensions
pages/views
pages/virtualrecord
pages/iteration
diff --git a/docs/pages/api.rst b/docs/pages/api.rst
index 1723b34615..6db207ebf9 100644
--- a/docs/pages/api.rst
+++ b/docs/pages/api.rst
@@ -26,14 +26,14 @@ Useful helpers
:members:
.. doxygenfunction:: llama::structName
-Array domain
------------
+Array dimensions
+----------------
-.. doxygenstruct:: llama::ArrayDomain
+.. doxygenstruct:: llama::ArrayDims
-.. doxygenstruct:: llama::ArrayDomainIndexIterator
+.. doxygenstruct:: llama::ArrayDimsIndexIterator
:members:
-.. doxygenstruct:: llama::ArrayDomainIndexRange
+.. doxygenstruct:: llama::ArrayDimsIndexRange
:members:
Record dimension
@@ -112,11 +112,11 @@ Mappings
Common utilities
^^^^^^^^^^^^^^^^
-.. doxygenstruct:: llama::mapping::LinearizeArrayDomainCpp
+.. doxygenstruct:: llama::mapping::LinearizeArrayDimsCpp
:members:
-.. doxygenstruct:: llama::mapping::LinearizeArrayDomainFortran
+.. doxygenstruct:: llama::mapping::LinearizeArrayDimsFortran
:members:
-.. doxygenstruct:: llama::mapping::LinearizeArrayDomainMorton
+.. doxygenstruct:: llama::mapping::LinearizeArrayDimsMorton
:members:
Tree mapping
diff --git a/docs/pages/blobs.rst b/docs/pages/blobs.rst
index 16c41a6773..8fb46f5f61 100644
--- a/docs/pages/blobs.rst
+++ b/docs/pages/blobs.rst
@@ -62,8 +62,8 @@ Creating a small view of :math:`4 \times 4` may look like this:
.. code-block:: C++
- using ArrayDomain = llama::ArrayDomain<2>;
- constexpr ArrayDomain miniSize{4, 4};
+ using ArrayDims = llama::ArrayDims<2>;
+ constexpr ArrayDims miniSize{4, 4};
using Mapping = /* some simple mapping */;
using BlobAllocator = llama::bloballoc::Stack<
diff --git a/docs/pages/domains.rst b/docs/pages/dimensions.rst
similarity index 85%
rename from docs/pages/domains.rst
rename to docs/pages/dimensions.rst
index 40ef30c72d..3ebaad5aa0 100644
--- a/docs/pages/domains.rst
+++ b/docs/pages/dimensions.rst
@@ -1,37 +1,36 @@
.. include:: common.rst
-.. _label-domains:
+.. _label-dimensions:
Dimensions
==========
-As mentioned in the section before, LLAMA distinguishes between the array domain and the record dimension.
-The most important difference is that the array domain is defined at *run time* whereas the record dimension is defined at *compile time*.
+As mentioned in the previous section, LLAMA distinguishes between the array and the record dimensions.
+The most important difference is that the array dimensions are defined at *run time* whereas the record dimension is defined at *compile time*.
This allows the problem size itself to be a run time value, while leaving the compiler room to optimize the data access.
.. _label-ad:
-Array domain
-------------
+Array dimensions
+----------------
-The array domain is an :math:`N`-dimensional array with :math:`N` itself being a
+The array dimensions form an :math:`N`-dimensional array with :math:`N` itself being a
compile time value but with run time values inside. LLAMA brings its own
:ref:`array class ` for such kinds of data structures, which is
ready for interoperability with hardware accelerator C++ dialects such as CUDA
(Nvidia) or HIP (AMD), or abstraction libraries such as the already mentioned
alpaka.
-A definition of a three-dimensional array domain of the size
-:math:`128 \times 256 \times 32` looks like this:
+A definition of three array dimensions of size :math:`128 \times 256 \times 32` looks like this:
.. code-block:: C++
- llama::ArrayDomain arrayDomainSize{128, 256, 32};
+ llama::ArrayDims arrayDimsSize{128, 256, 32};
The template arguments are deduced by the compiler using `CTAD <https://en.cppreference.com/w/cpp/language/class_template_argument_deduction>`_.
-The full type of :cpp:`arrayDomainSize` is :cpp:`llama::ArrayDomain<3>`.
+The full type of :cpp:`arrayDimsSize` is :cpp:`llama::ArrayDims<3>`.
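
A minimal sketch of what CTAD yields here, assuming nothing beyond the lines above:

.. code-block:: C++

    llama::ArrayDims arrayDimsSize{128, 256, 32}; // CTAD deduces llama::ArrayDims<3>
    static_assert(std::is_same_v<decltype(arrayDimsSize), llama::ArrayDims<3>>);
    const std::size_t rows = arrayDimsSize[0]; // extents are ordinary run time values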
-.. _label-dd:
+.. _label-rd:
Record dimension
----------------
diff --git a/docs/pages/introduction.rst b/docs/pages/introduction.rst
index 93f902929c..1c8d4107aa 100644
--- a/docs/pages/introduction.rst
+++ b/docs/pages/introduction.rst
@@ -100,7 +100,7 @@ The core data structure of LLAMA is the :ref:`View `,
which holds the memory for the data and provides methods to access the data space.
In order to create a view, a `Mapping` is needed which is an abstract concept.
LLAMA offers many kinds of mappings and users can also provide their own mappings.
-Mappings are constructed from a :ref:`record dimension `, containing tags, and an :ref:`Array domain `.
+Mappings are constructed from a :ref:`record dimension <label-rd>`, containing tags, and :ref:`array dimensions <label-ad>`.
In addition to a mapping defining the memory layout, an array of :ref:`Blobs ` is needed for a view, supplying the actual storage behind the view.
A blob is any object representing a contiguous chunk of memory, byte-wise addressable using :cpp:`operator[]`.
A suitable Blob array is either directly provided by the user or built using a :ref:`BlobAllocator ` when a view is created by a call to `allocView`.
@@ -108,8 +108,8 @@ A blob allocator is again an abstract concept and any object returning a blob of
LLAMA comes with a set of predefined blob allocators and users can again provide their own.
Once a view is created, the user can navigate on the data managed by the view.
-On top of a view, a :ref:`VirtualView ` can be created, offering access to a subrange of the array domain.
-Elements of the array domain, called records, are accessed on both, View and VirtualView, by calling :cpp:`operator()` with an instance of the array domain.
+On top of a view, a :ref:`VirtualView ` can be created, offering access to a subspace of the array dimensions.
+Elements of the array dimensions, called records, are accessed on both View and VirtualView by calling :cpp:`operator()` with an array dimensions coordinate as an instance of :cpp:`ArrayDims`.
This access returns a :ref:`VirtualRecord `, allowing further access using the tags from the record dimension, until eventually a reference to actual data in memory is returned.
diff --git a/docs/pages/iteration.rst b/docs/pages/iteration.rst
index b9b8293bc4..64542f507d 100644
--- a/docs/pages/iteration.rst
+++ b/docs/pages/iteration.rst
@@ -5,18 +5,18 @@
Iteration
=========
-Array domain iterating
-----------------------
+Array dimensions iteration
+--------------------------
-The array domain spans an N-dimensional space of integral indices.
+The array dimensions span an N-dimensional space of integral indices.
Sometimes we just want to quickly iterate over all coordinates in this index space.
-This is what :cpp:`llama::ArrayDomainRange` is for, which is a range in the C++ sense and
+This is what :cpp:`llama::ArrayDimsIndexRange` is for, which is a range in the C++ sense and
offers the :cpp:`begin()` and :cpp:`end()` member functions with corresponding iterators to support STL algorithms or the range-for loop.
.. code-block:: C++
- llama::ArrayDomain<2> ad{3, 3};
- llama::ArrayDomainIndexRange range{ad};
+ llama::ArrayDims<2> ad{3, 3};
+ llama::ArrayDimsIndexRange range{ad};
std::for_each(range.begin(), range.end(), [](auto coord) {
// coord is {0, 0}, {0, 1}, {0, 2}, {1, 0}, {1, 1}, {1, 2}, {2, 0}, {2, 1}, {2, 2}
@@ -27,7 +27,7 @@ offers the :cpp:`begin()` and :cpp:`end()` member functions with corresponding
}
-Record dimension iterating
+Record dimension iteration
--------------------------
The record dimension is iterated using :cpp:`llama::forEachLeaf`.
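
A short sketch of such an iteration; RecordDim stands for any record dimension, and :cpp:`forEachLeaf` is assumed to take it as explicit template argument, as the viewcopy example further down uses it:

.. code-block:: C++

    llama::forEachLeaf<RecordDim>([&](auto coord) {
        // coord addresses one leaf field of RecordDim and can be used
        // to access a view, e.g. view(ad)(coord)
    });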
@@ -114,7 +114,7 @@ Having an iterator to a view opens up the standard library for use in conjunctio
.. code-block:: C++
- using ArrayDomain = llama::ArrayDomain<1>;
+ using ArrayDims = llama::ArrayDims<1>;
// ...
auto view = llama::allocView(mapping);
@@ -136,8 +136,8 @@ Since virtual records interact with each other based on the tags and not the und
.. code-block:: C++
- auto aosView = llama::allocView(llama::mapping::AoS{arrayDomain});
- auto soaView = llama::allocView(llama::mapping::SoA{arrayDomain});
+ auto aosView = llama::allocView(llama::mapping::AoS{arrayDimsSize});
+ auto soaView = llama::allocView(llama::mapping::SoA{arrayDimsSize});
// ...
std::copy(begin(aosView), end(aosView), begin(soaView));
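
Other algorithms work the same way; a sketch assuming that scalar assignment to a virtual record broadcasts to all record fields (as in the simpletest example further down):

.. code-block:: C++

    std::for_each(begin(aosView), end(aosView), [](auto vr) {
        vr = 0.0; // vr is a VirtualRecord referencing one record in the view
    });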
diff --git a/docs/pages/mappings.rst b/docs/pages/mappings.rst
index a137dc488f..adeb50ffbc 100644
--- a/docs/pages/mappings.rst
+++ b/docs/pages/mappings.rst
@@ -5,8 +5,8 @@
Mappings
========
-One of the core tasks of LLAMA is to map an address from the array domain and
-record dimension to some address in the allocated memory space.
+One of the core tasks of LLAMA is to map a coordinate in the array and
+record dimensions to some address in the allocated memory space.
This is particularly challenging if the compiler shall still be able to optimize the resulting
memory accesses (vectorization, reordering, aligned loads, etc.).
The compiler needs to **understand** the semantics of the mapping at compile time.
@@ -26,19 +26,19 @@ The view requires each mapping to fulfill the following concept:
template <typename M>
concept Mapping = requires(M m) {
- typename M::ArrayDomain;
+ typename M::ArrayDims;
typename M::RecordDim;
{ M::blobCount } -> std::convertible_to<std::size_t>;
llama::Array<int, M::blobCount>{}; // validates constexpr-ness
{ m.blobSize(std::size_t{}) } -> std::same_as<std::size_t>;
- { m.blobNrAndOffset(typename M::ArrayDomain{}) } -> std::same_as<llama::NrAndOffset>;
+ { m.blobNrAndOffset(typename M::ArrayDims{}) } -> std::same_as<llama::NrAndOffset>;
};
-That is, each mapping type needs to expose the types :cpp:`M::ArrayDomain` and :cpp:`M::RecordDim`.
+That is, each mapping type needs to expose the types :cpp:`M::ArrayDims` and :cpp:`M::RecordDim`.
Furthermore, each mapping needs to provide a static constexpr member variable :cpp:`blobCount` and two member functions.
:cpp:`blobSize(i)` gives the size in bytes of the :cpp:`i`\ th block of memory needed for this mapping.
:cpp:`i` is in the range of :cpp:`0` to :cpp:`blobCount - 1`.
-:cpp:`blobNrAndOffset(ad)` implements the core mapping logic by translating a array domain coordinate :cpp:`ad` into a value of :cpp:`llama::NrAndOffset`, containing the blob number of offset within the blob where the value should be stored.
+:cpp:`blobNrAndOffset(ad)` implements the core mapping logic by translating an array dimensions coordinate :cpp:`ad` into a value of :cpp:`llama::NrAndOffset`, containing the blob number and the offset within the blob where the value should be stored.
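
To make the concept concrete, here is a minimal sketch of a user-provided mapping: a packed AoS layout in a single blob with row-major linearization. The name MyAoSMapping is illustrative, and :cpp:`llama::offsetOf<RecordDim, RecordCoords...>` is assumed to yield the byte offset of the addressed leaf field inside one packed record; if the library spells this helper differently, substitute accordingly:

.. code-block:: C++

    template <typename T_ArrayDims, typename T_RecordDim>
    struct MyAoSMapping
    {
        using ArrayDims = T_ArrayDims;
        using RecordDim = T_RecordDim;
        static constexpr std::size_t blobCount = 1;

        constexpr MyAoSMapping() = default;
        constexpr explicit MyAoSMapping(ArrayDims size, RecordDim = {}) : arrayDimsSize(size) {}

        constexpr auto blobSize(std::size_t) const -> std::size_t
        {
            std::size_t flatSize = 1;
            for (auto extent : arrayDimsSize) // number of records spanned by the array dimensions
                flatSize *= extent;
            return flatSize * llama::sizeOf<RecordDim>;
        }

        template <std::size_t... RecordCoords>
        constexpr auto blobNrAndOffset(ArrayDims coord) const -> llama::NrAndOffset
        {
            std::size_t flatIndex = 0; // row-major linearization, like LinearizeArrayDimsCpp
            for (std::size_t i = 0; i < ArrayDims::rank; i++)
                flatIndex = flatIndex * arrayDimsSize[i] + coord[i];
            return {0, flatIndex * llama::sizeOf<RecordDim>
                    + llama::offsetOf<RecordDim, RecordCoords...>}; // assumed helper
        }

        ArrayDims arrayDimsSize;
    };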
AoS mappings
------------
@@ -49,12 +49,12 @@ However, they do not vectorize well in practice.
.. code-block:: C++
- llama::mapping::AoS<ArrayDomain, RecordDim> mapping{arrayDomainSize};
- llama::mapping::AoS<ArrayDomain, RecordDim, true> mapping{arrayDomainSize}; // respect alignment
- llama::mapping::AoS<ArrayDomain, RecordDim, true, llama::mapping::LinearizeArrayDomainFortran> mapping{arrayDomainSize}; // respect alignment, column major
+ llama::mapping::AoS<ArrayDims, RecordDim> mapping{arrayDimsSize};
+ llama::mapping::AoS<ArrayDims, RecordDim, true> mapping{arrayDimsSize}; // respect alignment
+ llama::mapping::AoS<ArrayDims, RecordDim, true, llama::mapping::LinearizeArrayDimsFortran> mapping{arrayDimsSize}; // respect alignment, column major
-By default, the :cpp:`ArrayDomain` is linearized using :cpp:`llama::mapping::LinearizeArrayDomainCpp` and the layout is tightly packed.
+By default, the :cpp:`ArrayDims` is linearized using :cpp:`llama::mapping::LinearizeArrayDimsCpp` and the layout is tightly packed.
LLAMA provides the aliases :cpp:`llama::mapping::AlignedAoS` and :cpp:`llama::mapping::PackedAoS` for convenience.
@@ -63,7 +63,7 @@ but, since the mapping code is more complicated, compilers currently fail to aut
.. code-block:: C++
- llama::mapping::AoSoA mapping{arrayDomainSize};
+ llama::mapping::AoSoA mapping{arrayDimsSize};
.. _label-tree-mapping:
@@ -77,12 +77,12 @@ This layout auto vectorizes well in practice.
.. code-block:: C++
- llama::mapping::SoA<ArrayDomain, RecordDim> mapping{arrayDomainSize};
- llama::mapping::SoA<ArrayDomain, RecordDim, true> mapping{arrayDomainSize}; // separate blob for each attribute
- llama::mapping::SoA<ArrayDomain, RecordDim, true, llama::mapping::LinearizeArrayDomainFortran> mapping{arrayDomainSize}; // separate blob for each attribute, column major
+ llama::mapping::SoA<ArrayDims, RecordDim> mapping{arrayDimsSize};
+ llama::mapping::SoA<ArrayDims, RecordDim, true> mapping{arrayDimsSize}; // separate blob for each attribute
+ llama::mapping::SoA<ArrayDims, RecordDim, true, llama::mapping::LinearizeArrayDimsFortran> mapping{arrayDimsSize}; // separate blob for each attribute, column major
-By default, the :cpp:`ArrayDomain` is linearized using :cpp:`llama::mapping::LinearizeArrayDomainCpp` and the layout is mapped into a single blob.
+By default, the :cpp:`ArrayDims` is linearized using :cpp:`llama::mapping::LinearizeArrayDimsCpp` and the layout is mapped into a single blob.
LLAMA provides the aliases :cpp:`llama::mapping::SingleBlobSoA` and :cpp:`llama::mapping::MultiBlobSoA` for convenience.
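
A sketch of using these convenience aliases, assuming they take the same leading template parameters as the underlying mappings:

.. code-block:: C++

    llama::mapping::AlignedAoS<ArrayDims, RecordDim> aosMapping{arrayDimsSize};
    llama::mapping::MultiBlobSoA<ArrayDims, RecordDim> soaMapping{arrayDimsSize};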
@@ -96,25 +96,25 @@ The AoSoA mapping has a mandatory additional parameter specifying the number of
.. code-block:: C++
- llama::mapping::AoSoA<ArrayDomain, RecordDim, 8> mapping{arrayDomainSize}; // inner array has 8 values
- llama::mapping::AoSoA<ArrayDomain, RecordDim, 8, llama::mapping::LinearizeArrayDomainFortran> mapping{arrayDomainSize}; // inner array has 8 values, column major
+ llama::mapping::AoSoA<ArrayDims, RecordDim, 8> mapping{arrayDimsSize}; // inner array has 8 values
+ llama::mapping::AoSoA<ArrayDims, RecordDim, 8, llama::mapping::LinearizeArrayDimsFortran> mapping{arrayDimsSize}; // inner array has 8 values, column major
-By default, the :cpp:`ArrayDomain` is linearized using :cpp:`llama::mapping::LinearizeArrayDomainCpp`.
+By default, the :cpp:`ArrayDims` is linearized using :cpp:`llama::mapping::LinearizeArrayDimsCpp`.
LLAMA also provides a helper :cpp:`llama::mapping::maxLanes` which can be used to determine the maximum vector lanes which can be used for a given record dimension and vector register size.
In this example, the inner array has a size of N so that even the largest type in the record dimension fits N times into a vector register of 256 bits (e.g. AVX2).
.. code-block:: C++
- llama::mapping::AoSoA<ArrayDomain, RecordDim, llama::mapping::maxLanes<RecordDim, 256>> mapping{arrayDomainSize};
+ llama::mapping::AoSoA<ArrayDims, RecordDim, llama::mapping::maxLanes<RecordDim, 256>> mapping{arrayDimsSize};
One mapping
-----------
-The One mapping is intended to map all coordinates in the array domain onto the same memory location.
+The One mapping is intended to map all coordinates in the array dimensions onto the same memory location.
This is commonly used in the `llama::One` virtual record, but also offers interesting applications in conjunction with the `llama::mapping::Split` mapping.
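
A common way to get such a single-record view is :cpp:`llama::allocViewStack` (see View.hpp further down), which combines the One mapping with stack memory; a sketch, with RecordDim standing for any record dimension:

.. code-block:: C++

    auto temp = llama::allocViewStack<1, RecordDim>(); // one record, backed by stack memory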
@@ -129,9 +129,9 @@ The remaining record dimension is mapped using a second mapping.
.. code-block:: C++
- llama::mapping::Split<ArrayDomain, RecordDim, llama::RecordCoord<1>, llama::mapping::SoA, llama::mapping::PackedAoS>
-     mapping{arrayDomainSize}; // maps the subtree at index 1 as SoA, the rest as packed AoS
+ llama::mapping::Split<ArrayDims, RecordDim, llama::RecordCoord<1>, llama::mapping::SoA, llama::mapping::PackedAoS>
+     mapping{arrayDimsSize}; // maps the subtree at index 1 as SoA, the rest as packed AoS
Split mappings can be nested to map a record dimension into even fancier combinations.
@@ -145,7 +145,7 @@ WARNING: The tree mapping is currently not maintained and we consider deprecatio
The LLAMA tree mapping is one approach to achieve the goal of mixing different mapping approaches.
Furthermore, it tries to establish a general mapping description language and mapping definition framework.
-Let's take the example record dimension from the :ref:`domain section`:
+Let's take the example record dimension from the :ref:`dimensions section <label-dimensions>`:
.. image:: ../images/layout_tree.svg
@@ -155,9 +155,9 @@ representing the repetition of branches and to define tree operations which
create new trees out of the old ones while providing methods to translate tree
coordinates from one tree to another.
-This is best demonstrated by an example. First of all the array domain needs to be
-represented as such an tree too. Let's assume a array domain of
-:math:`128 \times 64`:
+This is best demonstrated by an example.
+First of all the array dimensions need to be represented as such a tree too.
+Let's assume array dimensions of :math:`128 \times 64`:
.. image:: ../images/ud_tree_2.svg
@@ -166,8 +166,8 @@ The record dimension is already a tree, but as it has no run time influence, onl
.. image:: ../images/layout_tree_2.svg
-Now the two trees are connected so that we can represent array domain and record
-dimension with one tree:
+Now the two trees are connected so that we can represent array and record
+dimensions with one tree:
.. image:: ../images/start_tree_2.svg
@@ -189,7 +189,7 @@ Struct of array but with a padding after each 1024 elements may look like this:
The size of the leaf type in "pad" of course needs to be determined based on the
desired alignment and sub tree sizes.
-Such a tree (with smaller array domain for easier drawing) …
+Such a tree (with smaller array dimensions for easier drawing) …
.. image:: ../images/example_tree.svg
@@ -208,19 +208,19 @@ a further constructor parameter for the instantiation of this tuple.
};
using Mapping = llama::mapping::tree::Mapping<
- ArrayDomain,
+ ArrayDims,
RecordDim,
decltype(treeOperationList)
>;
Mapping mapping(
- arrayDomainSize,
+ arrayDimsSize,
treeOperationList
);
// or using CTAD and an unused argument for the record dimension:
llama::mapping::tree::Mapping mapping(
- arrayDomainSize,
+ arrayDimsSize,
llama::Tuple{
llama::mapping::tree::functor::LeafOnlyRT()
},
diff --git a/docs/pages/views.rst b/docs/pages/views.rst
index 85776d1458..32a52d1515 100644
--- a/docs/pages/views.rst
+++ b/docs/pages/views.rst
@@ -5,11 +5,9 @@
View
====
-The view is the main data structure a LLAMA user will work with. It takes
-coordinates in the array domain and record dimension and returns a reference to a record
-in memory which can be read from or written to. For easier use, some
-useful operations such as :cpp:`+=` are overloaded to operate on all record
-fields inside the record dimension at once.
+The view is the main data structure a LLAMA user will work with.
+It takes coordinates in the array and record dimensions and returns a reference to a record in memory which can be read from or written to.
+For easier use, some useful operations such as :cpp:`+=` are overloaded to operate on all record fields inside the record dimension at once.
.. _label-factory:
@@ -22,7 +20,7 @@ A view is allocated using the helper function :cpp:`allocView`, which takes a
.. code-block:: C++
using Mapping = ...; // see next section about mappings
- Mapping mapping(arrayDomainSize); // see section about domains
+ Mapping mapping(arrayDimsSize); // see section about dimensions
auto view = allocView(mapping); // optional blob allocator as 2nd argument
The :ref:`mapping ` and :ref:`blob allocator `
@@ -36,24 +34,24 @@ Data access
LLAMA tries to have an array of struct like interface.
When accessing an element of the view, the array part comes first, followed by tags from the record dimension.
-In C++, runtime values like the array domain coordinate are normal function parameters
+In C++, runtime values like the array dimensions coordinates are normal function parameters
whereas compile time values such as the record dimension tags are usually given as template arguments.
However, compile time information can be stored in a type, instantiated as a value and then passed to a function template deducing the type again.
This trick allows passing both runtime and compile time values as function arguments.
E.g. instead of calling :cpp:`f<MyType>()` we can call :cpp:`f(MyType{})` and let the compiler deduce the template argument of :cpp:`f`.
This trick is used in LLAMA to specify the access to a value of a view.
-An example access with the domains defined in the :ref:`domain section ` could look like this:
+An example access with the dimensions defined in the :ref:`dimensions section <label-dimensions>` could look like this:
.. code-block:: C++
view(1, 2, 3)(color{}, g{}) = 1.0;
-It is also possible to access the array domain with one compound argument like this:
+It is also possible to access the array dimensions with one compound argument like this:
.. code-block:: C++
- const ArrayDomain pos{1, 2, 3};
+ const ArrayDims pos{1, 2, 3};
view(pos)(color{}, g{}) = 1.0;
// or
view({1, 2, 3})(color{}, g{}) = 1.0;
@@ -82,7 +80,7 @@ This object is a central data type of LLAMA called :cpp:`llama::VirtualRecord`.
VirtualView
-----------
-Virtual views can be created on top of existing views, offering shifted access to a subrange of the array domain.
+Virtual views can be created on top of existing views, offering shifted access to a subspace of the array dimensions.
.. code-block:: C++
diff --git a/docs/pages/virtualrecord.rst b/docs/pages/virtualrecord.rst
index 460cf85f5d..f220a03343 100644
--- a/docs/pages/virtualrecord.rst
+++ b/docs/pages/virtualrecord.rst
@@ -28,8 +28,8 @@ This object is a :cpp:`llama::VirtualRecord`.
float& g = vdColor(g{});
g = 1.0;
-Supplying the array domain coordinates to a view access returns such a :cpp:`llama::VirtualRecord`, storing this array domain coordiante.
-This object can be thought of like a record in the :math:`N`-dimensional array domain space,
+Supplying the array dimensions coordinate to a view access returns such a :cpp:`llama::VirtualRecord`, storing this array dimensions coordinate.
+This object can be thought of as a record in the :math:`N`-dimensional array dimensions space,
but as the fields of this record may not be contiguous in memory, it is not a real object in the C++ sense and thus called virtual.
Accessing subparts of a :cpp:`llama::VirtualRecord` is done using `operator()` and the tag types from the record dimension.
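
Since the overloaded operators mentioned in the view section work on virtual records, whole records can be combined without naming each field; a sketch assuming a 2D view whose record fields support :cpp:`+=`:

.. code-block:: C++

    auto p1 = view(0, 0); // a llama::VirtualRecord
    auto p2 = view(0, 1);
    p1 += p2; // applied field by field to all matching record fields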
diff --git a/examples/alpaka/asyncblur/asyncblur.cpp b/examples/alpaka/asyncblur/asyncblur.cpp
index d8193bc3d2..bd2aea0f72 100644
--- a/examples/alpaka/asyncblur/asyncblur.cpp
+++ b/examples/alpaka/asyncblur/asyncblur.cpp
@@ -86,7 +86,7 @@ struct BlurKernel
// Using SoA for the shared memory
constexpr auto sharedChunkSize = ElemsPerBlock + 2 * KernelSize;
const auto sharedMapping = llama::mapping::SoA(
- typename View::ArrayDomain{sharedChunkSize, sharedChunkSize},
+ typename View::ArrayDims{sharedChunkSize, sharedChunkSize},
typename View::RecordDim{});
constexpr auto sharedMemSize = llama::sizeOf<typename View::RecordDim> * sharedChunkSize * sharedChunkSize;
auto& sharedMem = alpaka::declareSharedVar(acc);
@@ -105,8 +105,8 @@ struct BlurKernel
const std::size_t bStart[2]
= {bi[0] * ElemsPerBlock + threadIdxInBlock[0], bi[1] * ElemsPerBlock + threadIdxInBlock[1]};
const std::size_t bEnd[2] = {
- alpaka::math::min(acc, bStart[0] + ElemsPerBlock + 2 * KernelSize, oldImage.mapping.arrayDomainSize[0]),
- alpaka::math::min(acc, bStart[1] + ElemsPerBlock + 2 * KernelSize, oldImage.mapping.arrayDomainSize[1]),
+ alpaka::math::min(acc, bStart[0] + ElemsPerBlock + 2 * KernelSize, oldImage.mapping.arrayDimsSize[0]),
+ alpaka::math::min(acc, bStart[1] + ElemsPerBlock + 2 * KernelSize, oldImage.mapping.arrayDimsSize[1]),
};
LLAMA_INDEPENDENT_DATA
for (auto y = bStart[0]; y < bEnd[0]; y += threadsPerBlock)
@@ -119,8 +119,8 @@ struct BlurKernel
const std::size_t start[2] = {ti[0] * Elems, ti[1] * Elems};
const std::size_t end[2] = {
- alpaka::math::min(acc, start[0] + Elems, oldImage.mapping.arrayDomainSize[0] - 2 * KernelSize),
- alpaka::math::min(acc, start[1] + Elems, oldImage.mapping.arrayDomainSize[1] - 2 * KernelSize),
+ alpaka::math::min(acc, start[0] + Elems, oldImage.mapping.arrayDimsSize[0] - 2 * KernelSize),
+ alpaka::math::min(acc, start[1] + Elems, oldImage.mapping.arrayDimsSize[1] - 2 * KernelSize),
};
LLAMA_INDEPENDENT_DATA
@@ -208,12 +208,12 @@ try
}
// LLAMA
- using ArrayDomain = llama::ArrayDomain<2>;
+ using ArrayDims = llama::ArrayDims<2>;
auto treeOperationList = llama::Tuple{llama::mapping::tree::functor::LeafOnlyRT()};
- const auto hostMapping = llama::mapping::tree::Mapping{ArrayDomain{buffer_y, buffer_x}, treeOperationList, Pixel{}};
+ const auto hostMapping = llama::mapping::tree::Mapping{ArrayDims{buffer_y, buffer_x}, treeOperationList, Pixel{}};
const auto devMapping = llama::mapping::tree::Mapping{
- ArrayDomain{CHUNK_SIZE + 2 * KERNEL_SIZE, CHUNK_SIZE + 2 * KERNEL_SIZE},
+ ArrayDims{CHUNK_SIZE + 2 * KERNEL_SIZE, CHUNK_SIZE + 2 * KERNEL_SIZE},
treeOperationList,
PixelOnAcc{}};
@@ -298,14 +298,14 @@ try
struct VirtualHostElement
{
llama::VirtualView virtualHost;
- const ArrayDomain validMiniSize;
+ const ArrayDims validMiniSize;
};
std::list<VirtualHostElement> virtualHostList;
for (std::size_t chunk_y = 0; chunk_y < chunks[0]; ++chunk_y)
for (std::size_t chunk_x = 0; chunk_x < chunks[1]; ++chunk_x)
{
// Create virtual view with size of mini view
- const ArrayDomain validMiniSize{
+ const ArrayDims validMiniSize{
((chunk_y < chunks[0] - 1) ? CHUNK_SIZE : (img_y - 1) % CHUNK_SIZE + 1) + 2 * KERNEL_SIZE,
((chunk_x < chunks[1] - 1) ? CHUNK_SIZE : (img_x - 1) % CHUNK_SIZE + 1) + 2 * KERNEL_SIZE};
llama::VirtualView virtualHost(hostView, {chunk_y * CHUNK_SIZE, chunk_x * CHUNK_SIZE}, validMiniSize);
diff --git a/examples/alpaka/nbody/nbody.cpp b/examples/alpaka/nbody/nbody.cpp
index effb89b6e1..a578372243 100644
--- a/examples/alpaka/nbody/nbody.cpp
+++ b/examples/alpaka/nbody/nbody.cpp
@@ -153,18 +153,18 @@ struct UpdateKernel
{
// if there is only 1 thread per block, use stack instead of shared memory
if constexpr (BlockSize == 1)
- return llama::allocViewStack();
+ return llama::allocViewStack();
else
{
constexpr auto sharedMapping = []
{
- constexpr auto arrayDomain = llama::ArrayDomain{BlockSize};
+ constexpr auto arrayDims = llama::ArrayDims{BlockSize};
if constexpr (MappingSM == AoS)
- return llama::mapping::AoS{arrayDomain, Particle{}};
+ return llama::mapping::AoS{arrayDims, Particle{}};
if constexpr (MappingSM == SoA)
- return llama::mapping::SoA{arrayDomain, Particle{}};
+ return llama::mapping::SoA{arrayDims, Particle{}};
if constexpr (MappingSM == AoSoA)
- return llama::mapping::AoSoA{arrayDomain};
+ return llama::mapping::AoSoA{arrayDims};
}();
static_assert(decltype(sharedMapping)::blobCount == 1);
@@ -180,9 +180,9 @@ struct UpdateKernel
// TODO: we could optimize here, because only velocity is ever updated
auto pi = [&]
{
- constexpr auto arrayDomain = llama::ArrayDomain{Elems};
+ constexpr auto arrayDims = llama::ArrayDims{Elems};
constexpr auto mapping
- = llama::mapping::SoA{arrayDomain};
+ = llama::mapping::SoA{arrayDims};
constexpr auto blobAlloc = llama::bloballoc::Stack * Elems>{};
return llama::allocView(mapping, blobAlloc);
}();
@@ -264,15 +264,15 @@ void run(std::ostream& plotFile)
auto mapping = []
{
- const auto arrayDomain = llama::ArrayDomain{PROBLEM_SIZE};
+ const auto arrayDims = llama::ArrayDims{PROBLEM_SIZE};
if constexpr (MappingGM == AoS)
- return llama::mapping::AoS{arrayDomain, Particle{}};
+ return llama::mapping::AoS{arrayDims, Particle{}};
if constexpr (MappingGM == SoA)
- return llama::mapping::SoA{arrayDomain, Particle{}};
+ return llama::mapping::SoA{arrayDims, Particle{}};
// if constexpr (MappingGM == 2)
- // return llama::mapping::SoA{arrayDomain};
+ // return llama::mapping::SoA{arrayDims};
if constexpr (MappingGM == AoSoA)
- return llama::mapping::AoSoA{arrayDomain};
+ return llama::mapping::AoSoA{arrayDims};
}();
Stopwatch watch;
diff --git a/examples/alpaka/vectoradd/vectoradd.cpp b/examples/alpaka/vectoradd/vectoradd.cpp
index 8239f38695..276f110840 100644
--- a/examples/alpaka/vectoradd/vectoradd.cpp
+++ b/examples/alpaka/vectoradd/vectoradd.cpp
@@ -80,21 +80,21 @@ try
Queue queue(devAcc);
// LLAMA
- const auto arrayDomain = llama::ArrayDomain{PROBLEM_SIZE};
+ const auto arrayDims = llama::ArrayDims{PROBLEM_SIZE};
const auto mapping = [&]
{
if constexpr (MAPPING == 0)
- return llama::mapping::AoS{arrayDomain, Vector{}};
+ return llama::mapping::AoS{arrayDims, Vector{}};
if constexpr (MAPPING == 1)
- return llama::mapping::SoA{arrayDomain, Vector{}};
+ return llama::mapping::SoA{arrayDims, Vector{}};
if constexpr (MAPPING == 2)
- return llama::mapping::SoA{arrayDomain};
+ return llama::mapping::SoA{arrayDims};
if constexpr (MAPPING == 3)
- return llama::mapping::tree::Mapping{arrayDomain, llama::Tuple{}, Vector{}};
+ return llama::mapping::tree::Mapping{arrayDims, llama::Tuple{}, Vector{}};
if constexpr (MAPPING == 4)
return llama::mapping::tree::Mapping{
- arrayDomain,
+ arrayDims,
llama::Tuple{llama::mapping::tree::functor::LeafOnlyRT()},
Vector{}};
}();
diff --git a/examples/bufferguard/bufferguard.cpp b/examples/bufferguard/bufferguard.cpp
index 9c8b91fa9c..a4f3281f16 100644
--- a/examples/bufferguard/bufferguard.cpp
+++ b/examples/bufferguard/bufferguard.cpp
@@ -21,18 +21,18 @@ using Vector = llama::Record<
>;
// clang-format on
-template <template <typename, typename> typename InnerMapping, typename T_ArrayDomain, typename T_RecordDim>
+template <template <typename, typename> typename InnerMapping, typename T_ArrayDims, typename T_RecordDim>
struct GuardMapping2D
{
- static_assert(std::is_same_v<T_ArrayDomain, llama::ArrayDomain<2>>, "Only 2D arrays are implemented");
+ static_assert(std::is_same_v<T_ArrayDims, llama::ArrayDims<2>>, "Only 2D arrays are implemented");
- using ArrayDomain = T_ArrayDomain;
+ using ArrayDims = T_ArrayDims;
using RecordDim = T_RecordDim;
constexpr GuardMapping2D() = default;
- constexpr explicit GuardMapping2D(ArrayDomain size, RecordDim = {})
- : arrayDomainSize(size)
+ constexpr explicit GuardMapping2D(ArrayDims size, RecordDim = {})
+ : arrayDimsSize(size)
, left({size[0] - 2})
, right({size[0] - 2})
, top({size[1] - 2})
@@ -65,11 +65,11 @@ struct GuardMapping2D
}
template <std::size_t... RecordCoords>
- constexpr auto blobNrAndOffset(ArrayDomain coord) const -> llama::NrAndOffset
+ constexpr auto blobNrAndOffset(ArrayDims coord) const -> llama::NrAndOffset
{
// [0][0] is at left top
const auto [row, col] = coord;
- const auto [rowMax, colMax] = arrayDomainSize;
+ const auto [rowMax, colMax] = arrayDimsSize;
if (col == 0)
{
@@ -154,15 +154,15 @@ struct GuardMapping2D
return a;
}
- llama::mapping::One<ArrayDomain, RecordDim> leftTop;
- llama::mapping::One<ArrayDomain, RecordDim> leftBot;
- llama::mapping::One<ArrayDomain, RecordDim> rightTop;
- llama::mapping::One<ArrayDomain, RecordDim> rightBot;
- InnerMapping<llama::ArrayDomain<1>, RecordDim> left;
- InnerMapping<llama::ArrayDomain<1>, RecordDim> right;
- InnerMapping<llama::ArrayDomain<1>, RecordDim> top;
- InnerMapping<llama::ArrayDomain<1>, RecordDim> bot;
- InnerMapping<llama::ArrayDomain<2>, RecordDim> center;
+ llama::mapping::One<ArrayDims, RecordDim> leftTop;
+ llama::mapping::One<ArrayDims, RecordDim> leftBot;
+ llama::mapping::One<ArrayDims, RecordDim> rightTop;
+ llama::mapping::One<ArrayDims, RecordDim> rightBot;
+ InnerMapping<llama::ArrayDims<1>, RecordDim> left;
+ InnerMapping<llama::ArrayDims<1>, RecordDim> right;
+ InnerMapping<llama::ArrayDims<1>, RecordDim> top;
+ InnerMapping<llama::ArrayDims<1>, RecordDim> bot;
+ InnerMapping<llama::ArrayDims<2>, RecordDim> center;
static constexpr auto leftTopOff = std::size_t{0};
static constexpr auto leftBotOff = leftTopOff + decltype(leftTop)::blobCount;
@@ -177,7 +177,7 @@ struct GuardMapping2D
public:
static constexpr auto blobCount = centerOff + decltype(center)::blobCount;
- ArrayDomain arrayDomainSize;
+ ArrayDims arrayDimsSize;
};
template <template <typename, typename> typename Mapping>
@@ -202,8 +202,8 @@ void run(const std::string& mappingName)
constexpr auto rows = 7;
constexpr auto cols = 5;
- const auto arrayDomain = llama::ArrayDomain{rows, cols};
- const auto mapping = GuardMapping2D<Mapping, llama::ArrayDomain<2>, Vector>{arrayDomain};
+ const auto arrayDims = llama::ArrayDims{rows, cols};
+ const auto mapping = GuardMapping2D<Mapping, llama::ArrayDims<2>, Vector>{arrayDims};
std::ofstream{"bufferguard_" + mappingName + ".svg"} << llama::toSvg(mapping);
auto view1 = allocView(mapping);
diff --git a/examples/cuda/nbody/nbody.cu b/examples/cuda/nbody/nbody.cu
index 3409b42a39..3aa682ee38 100644
--- a/examples/cuda/nbody/nbody.cu
+++ b/examples/cuda/nbody/nbody.cu
@@ -79,17 +79,19 @@ template
- return llama::mapping::AoSoA{arrayDomain};
+ return llama::mapping::AoSoA{arrayDims};
}();
llama::Array sharedMems{};
@@ -183,24 +185,25 @@ try
title += " Acc";
std::cout << '\n' << title << '\n';
- auto mapping = [] {
- const auto arrayDomain = llama::ArrayDomain{PROBLEM_SIZE};
+ auto mapping = []
+ {
+ const auto arrayDims = llama::ArrayDims{PROBLEM_SIZE};
if constexpr (Mapping == 0)
- return llama::mapping::AoS{arrayDomain, Particle{}};
+ return llama::mapping::AoS{arrayDims, Particle{}};
if constexpr (Mapping == 1)
- return llama::mapping::SoA{arrayDomain, Particle{}};
+ return llama::mapping::SoA{arrayDims, Particle{}};
if constexpr (Mapping == 2)
- return llama::mapping::SoA{arrayDomain, Particle{}, std::true_type{}};
+ return llama::mapping::SoA{arrayDims, Particle{}, std::true_type{}};
if constexpr (Mapping == 3)
- return llama::mapping::AoSoA{arrayDomain};
+ return llama::mapping::AoSoA{arrayDims};
if constexpr (Mapping == 4)
return llama::mapping::Split<
- decltype(arrayDomain),
+ decltype(arrayDims),
Particle,
llama::RecordCoord<1>,
llama::mapping::SoA,
llama::mapping::SoA,
- true>{arrayDomain};
+ true>{arrayDims};
}();
Stopwatch watch;
diff --git a/examples/heatequation/heatequation.cpp b/examples/heatequation/heatequation.cpp
index a0e930a87e..298f97caae 100644
--- a/examples/heatequation/heatequation.cpp
+++ b/examples/heatequation/heatequation.cpp
@@ -109,7 +109,7 @@ try
return 1;
}
- const auto mapping = llama::mapping::SoA{llama::ArrayDomain{extent}, double{}};
+ const auto mapping = llama::mapping::SoA{llama::ArrayDims{extent}, double{}};
auto uNext = llama::allocView(mapping);
auto uCurr = llama::allocView(mapping);
diff --git a/examples/nbody/nbody.cpp b/examples/nbody/nbody.cpp
index 4f41e14018..1a51c8d01c 100644
--- a/examples/nbody/nbody.cpp
+++ b/examples/nbody/nbody.cpp
@@ -130,23 +130,23 @@ namespace usellama
Stopwatch watch;
auto mapping = [&]
{
- const auto arrayDomain = llama::ArrayDomain{PROBLEM_SIZE};
+ const auto arrayDims = llama::ArrayDims{PROBLEM_SIZE};
if constexpr (Mapping == 0)
- return llama::mapping::AoS{arrayDomain, Particle{}};
+ return llama::mapping::AoS{arrayDims, Particle{}};
if constexpr (Mapping == 1)
- return llama::mapping::SoA{arrayDomain, Particle{}};
+ return llama::mapping::SoA{arrayDims, Particle{}};
if constexpr (Mapping == 2)
- return llama::mapping::SoA{arrayDomain};
+ return llama::mapping::SoA{arrayDims};
if constexpr (Mapping == 3)
- return llama::mapping::AoSoA{arrayDomain};
+ return llama::mapping::AoSoA{arrayDims};
if constexpr (Mapping == 4)
return llama::mapping::Split<
- decltype(arrayDomain),
+ decltype(arrayDims),
Particle,
llama::RecordCoord<1>,
llama::mapping::PreconfiguredSoA<>::type,
llama::mapping::PreconfiguredSoA<>::type,
- true>{arrayDomain};
+ true>{arrayDims};
}();
if constexpr (DUMP_MAPPING)
std::ofstream(title + ".svg") << llama::toSvg(mapping);
diff --git a/examples/nbody_benchmark/nbody.cpp b/examples/nbody_benchmark/nbody.cpp
index 421ebacda7..deff919165 100644
--- a/examples/nbody_benchmark/nbody.cpp
+++ b/examples/nbody_benchmark/nbody.cpp
@@ -85,13 +85,13 @@ void run(std::ostream& plotFile)
auto mapping = [&]
{
- const auto arrayDomain = llama::ArrayDomain{PROBLEM_SIZE};
+ const auto arrayDims = llama::ArrayDims{PROBLEM_SIZE};
if constexpr (Mapping == 0)
- return llama::mapping::AoS{arrayDomain, Particle{}};
+ return llama::mapping::AoS{arrayDims, Particle{}};
if constexpr (Mapping == 1)
- return llama::mapping::SoA{arrayDomain, Particle{}};
+ return llama::mapping::SoA{arrayDims, Particle{}};
if constexpr (Mapping == 2)
- return llama::mapping::SoA{arrayDomain};
+ return llama::mapping::SoA{arrayDims};
}();
auto particles = llama::allocView(std::move(mapping), llama::bloballoc::Vector{});
diff --git a/examples/simpletest/simpletest.cpp b/examples/simpletest/simpletest.cpp
index cfe1147c09..0134cae687 100644
--- a/examples/simpletest/simpletest.cpp
+++ b/examples/simpletest/simpletest.cpp
@@ -6,8 +6,7 @@
* itself is not under the public domain but LGPL3+.
*/
-/// Simple example for LLAMA showing how to define an array domain and a record dimension, to create a view and to
-/// access the data
+/// Simple example for LLAMA showing how to define array and record dimensions, to create a view and to access the data.
#include
#include
@@ -32,7 +31,10 @@ namespace st
struct Options{};
} // namespace st
-/// A record dimension in LLAMA is a type, probably always a \ref llama::Record. This takes a template list of all members of this struct-like dimension. Every member needs to be a \ref llama::Field. A Field is a list of two elements itself, first the name of the element as type and secondly the element type itself, which may be a nested Record.
+/// A record dimension in LLAMA is a type, probably always a \ref llama::Record. This takes a template list of all
+/// members of this struct-like dimension. Every member needs to be a \ref llama::Field. A Field is a list of two
+/// elements itself, first the name of the element as type and secondly the element type itself, which may be a nested
+/// Record.
using Name = llama::Record<
llama::Field,
@@ -125,36 +127,36 @@ struct SetZeroFunctor
auto main() -> int
try
{
- // Defining a two-dimensional array domain
- using UD = llama::ArrayDomain<2>;
- // Setting the run time size of the array domain to 8192 * 8192
- UD udSize{8192, 8192};
+ // Defining two array dimensions
+ using ArrayDims = llama::ArrayDims<2>;
+ // Setting the run time size of the array dimensions to 8192 * 8192
+ ArrayDims adSize{8192, 8192};
- // Printing dimension/domain informations at runtime
+ // Printing dimensions information at runtime
std::cout << "Record dimension is " << addLineBreaks(type(Name())) << '\n';
std::cout << "AoS address of (0,100) <0,1>: "
- << llama::mapping::AoS<UD, Name>(udSize).blobNrAndOffset<0, 1>({0, 100}).offset << '\n';
+ << llama::mapping::AoS<ArrayDims, Name>(adSize).blobNrAndOffset<0, 1>({0, 100}).offset << '\n';
std::cout << "SoA address of (0,100) <0,1>: "
- << llama::mapping::SoA<UD, Name>(udSize).blobNrAndOffset<0, 1>({0, 100}).offset << '\n';
+ << llama::mapping::SoA<ArrayDims, Name>(adSize).blobNrAndOffset<0, 1>({0, 100}).offset << '\n';
std::cout << "sizeOf RecordDim: " << llama::sizeOf<Name> << '\n';
std::cout << type(llama::GetCoordFromTags()) << '\n';
// choosing a native struct of array mapping for this simple test example
- using Mapping = llama::mapping::SoA<UD, Name>;
+ using Mapping = llama::mapping::SoA<ArrayDims, Name>;
- // Instantiating the mapping with the array domain size
- Mapping mapping(udSize);
+ // Instantiating the mapping with the array dimensions size
+ Mapping mapping(adSize);
// getting a view with memory from the default allocator
auto view = allocView(mapping);
- // defining a position in the array domain
- const UD pos{0, 0};
+ // defining a position in the array dimensions
+ const ArrayDims pos{0, 0};
st::Options Options_;
const auto Weight_ = st::Weight{};
- // using the position in the array domain and a tree coord or a uid in the
+ // using the position in the array dimensions and a tree coord or a uid in the
// record dimension to get the reference to an element in the view
float& position_x = view(pos)(llama::RecordCoord<0, 0>{});
double& momentum_z = view[pos](st::Momentum{}, st::Z{});
@@ -173,13 +175,12 @@ try
std::cout << &options_2 << " " << reinterpret_cast(&options_2) - reinterpret_cast(&weight)
<< '\n';
- // iterating over the array domain at run time to do some stuff with the
- // allocated data
- for (size_t x = 0; x < udSize[0]; ++x)
+ // iterating over the array dimensions at run time to do some stuff with the allocated data
+ for (size_t x = 0; x < adSize[0]; ++x)
// telling the compiler that all data in the following loop is
// independent of each other and thus can be vectorized
LLAMA_INDEPENDENT_DATA
- for (size_t y = 0; y < udSize[1]; ++y)
+ for (size_t y = 0; y < adSize[1]; ++y)
{
// Defining a functor for a given virtual record
SetZeroFunctor szf{view(x, y)};
@@ -189,14 +190,13 @@ try
// Applying the functor for the sub tree momentum (0), so basically
// for momentum.z, and momentum.x
llama::forEachLeaf(szf, st::Momentum{});
- // the array domain address can be given as multiple comma separated
- // arguments or as one parameter of type array domain
- view({x, y}) = double(x + y) / double(udSize[0] + udSize[1]);
+ // the array dimensions coordinate can be given as multiple comma separated arguments or as one parameter of type ArrayDims
+ view({x, y}) = double(x + y) / double(adSize[0] + adSize[1]);
}
- for (size_t x = 0; x < udSize[0]; ++x)
+ for (size_t x = 0; x < adSize[0]; ++x)
LLAMA_INDEPENDENT_DATA
- for (size_t y = 0; y < udSize[1]; ++y)
+ for (size_t y = 0; y < adSize[1]; ++y)
{
// Showing different options of accessing data with LLAMA. Internally
// they all do the same, data- and mapping-wise
@@ -210,9 +210,9 @@ try
}
double sum = 0.0;
LLAMA_INDEPENDENT_DATA
- for (size_t x = 0; x < udSize[0]; ++x)
+ for (size_t x = 0; x < adSize[0]; ++x)
LLAMA_INDEPENDENT_DATA
- for (size_t y = 0; y < udSize[1]; ++y)
+ for (size_t y = 0; y < adSize[1]; ++y)
sum += view(x, y)(llama::RecordCoord<1, 0>{});
std::cout << "Sum: " << sum << '\n';
diff --git a/examples/vectoradd/vectoradd.cpp b/examples/vectoradd/vectoradd.cpp
index 47850b44d8..46cac2cc36 100644
--- a/examples/vectoradd/vectoradd.cpp
+++ b/examples/vectoradd/vectoradd.cpp
@@ -48,18 +48,18 @@ namespace usellama
const auto mapping = [&]
{
- const auto arrayDomain = llama::ArrayDomain{PROBLEM_SIZE};
+ const auto arrayDims = llama::ArrayDims{PROBLEM_SIZE};
if constexpr (MAPPING == 0)
- return llama::mapping::AoS{arrayDomain, Vector{}};
+ return llama::mapping::AoS{arrayDims, Vector{}};
if constexpr (MAPPING == 1)
- return llama::mapping::SoA{arrayDomain, Vector{}};
+ return llama::mapping::SoA{arrayDims, Vector{}};
if constexpr (MAPPING == 2)
- return llama::mapping::SoA{arrayDomain};
+ return llama::mapping::SoA{arrayDims};
if constexpr (MAPPING == 3)
- return llama::mapping::tree::Mapping{arrayDomain, llama::Tuple{}, Vector{}};
+ return llama::mapping::tree::Mapping{arrayDims, llama::Tuple{}, Vector{}};
if constexpr (MAPPING == 4)
return llama::mapping::tree::Mapping{
- arrayDomain,
+ arrayDims,
llama::Tuple{llama::mapping::tree::functor::LeafOnlyRT()},
Vector{}};
}();
diff --git a/examples/viewcopy/viewcopy.cpp b/examples/viewcopy/viewcopy.cpp
index aa281b0708..47d2171168 100644
--- a/examples/viewcopy/viewcopy.cpp
+++ b/examples/viewcopy/viewcopy.cpp
@@ -41,7 +41,7 @@ namespace llamaex
using namespace llama;
template <std::size_t Dim, typename Func>
- void parallelForEachADCoord(ArrayDomain<Dim> adSize, std::size_t numThreads, Func&& func)
+ void parallelForEachADCoord(ArrayDims<Dim> adSize, std::size_t numThreads, Func&& func)
{
#pragma omp parallel for num_threads(numThreads)
for (std::ptrdiff_t i = 0; i < static_cast<std::ptrdiff_t>(adSize[0]); i++)
@@ -49,7 +49,7 @@ namespace llamaex
if constexpr (Dim > 1)
forEachADCoord(internal::popFront(adSize), std::forward<Func>(func), static_cast<std::size_t>(i));
else
- std::forward<Func>(func)(ArrayDomain{static_cast<std::size_t>(i)});
+ std::forward<Func>(func)(ArrayDims{static_cast<std::size_t>(i)});
}
}
} // namespace llamaex
@@ -62,11 +62,11 @@ void naive_copy(
{
static_assert(std::is_same_v);
- if (srcView.mapping.arrayDomainSize != dstView.mapping.arrayDomainSize)
- throw std::runtime_error{"UserDomain sizes are different"};
+ if (srcView.mapping.arrayDimsSize != dstView.mapping.arrayDimsSize)
+ throw std::runtime_error{"Array dimensions sizes are different"};
llamaex::parallelForEachADCoord(
- srcView.mapping.arrayDomainSize,
+ srcView.mapping.arrayDimsSize,
numThreads,
[&](auto ad)
{
@@ -97,7 +97,7 @@ void parallel_memcpy(std::byte* dst, const std::byte* src, std::size_t size, std
template <
bool ReadOpt,
- typename ArrayDomain,
+ typename ArrayDims,
typename RecordDim,
std::size_t LanesSrc,
typename BlobType1,
@@ -105,22 +105,22 @@ template <
typename BlobType2>
void aosoa_copy(
const llama::View<
- llama::mapping::AoSoA<ArrayDomain, RecordDim, LanesSrc>,
+ llama::mapping::AoSoA<ArrayDims, RecordDim, LanesSrc>,
BlobType1>& srcView,
llama::View<
- llama::mapping::AoSoA<ArrayDomain, RecordDim, LanesDst>,
+ llama::mapping::AoSoA<ArrayDims, RecordDim, LanesDst>,
BlobType2>& dstView,
std::size_t numThreads = 1)
{
static_assert(decltype(srcView.storageBlobs)::rank == 1);
static_assert(decltype(dstView.storageBlobs)::rank == 1);
- if (srcView.mapping.arrayDomainSize != dstView.mapping.arrayDomainSize)
- throw std::runtime_error{"UserDomain sizes are different"};
+ if (srcView.mapping.arrayDimsSize != dstView.mapping.arrayDimsSize)
+ throw std::runtime_error{"Array dimensions sizes are different"};
const auto flatSize = std::reduce(
- std::begin(dstView.mapping.arrayDomainSize),
- std::end(dstView.mapping.arrayDomainSize),
+ std::begin(dstView.mapping.arrayDimsSize),
+ std::end(dstView.mapping.arrayDimsSize),
std::size_t{1},
std::multiplies<>{});
@@ -199,7 +199,7 @@ template
auto hash(const llama::View& view)
{
std::size_t acc = 0;
- for (auto ad : llama::ArrayDomainIndexRange{view.mapping.arrayDomainSize})
+ for (auto ad : llama::ArrayDimsIndexRange{view.mapping.arrayDimsSize})
llama::forEachLeaf([&](auto coord) { boost::hash_combine(acc, view(ad)(coord)); });
return acc;
}
@@ -209,7 +209,7 @@ auto prepareViewAndHash(Mapping mapping)
auto view = llama::allocView(mapping);
auto value = 0.0f;
- for (auto ad : llama::ArrayDomainIndexRange{mapping.arrayDomainSize})
+ for (auto ad : llama::ArrayDimsIndexRange{mapping.arrayDimsSize})
{
auto p = view(ad);
p(tag::Pos{}, tag::X{}) = value++;
@@ -249,7 +249,7 @@ try
const auto numThreads = static_cast(omp_get_num_threads());
std::cout << "Threads: " << numThreads << "\n";
- const auto userDomain = llama::ArrayDomain{1024, 1024, 16};
+ const auto arrayDims = llama::ArrayDims{1024, 1024, 16};
std::ofstream plotFile{"viewcopy.tsv"};
plotFile.exceptions(std::ios::badbit | std::ios::failbit);
@@ -259,8 +259,8 @@ try
{
std::cout << "AoS -> SoA\n";
plotFile << "\"AoS -> SoA\"\t";
- const auto srcMapping = llama::mapping::AoS{userDomain, Particle{}};
- const auto dstMapping = llama::mapping::SoA{userDomain, Particle{}};
+ const auto srcMapping = llama::mapping::AoS{arrayDims, Particle{}};
+ const auto dstMapping = llama::mapping::SoA{arrayDims, Particle{}};
auto [srcView, srcHash] = prepareViewAndHash(srcMapping);
benchmarkCopy(
@@ -318,8 +318,8 @@ try
{
std::cout << "SoA -> AoS\n";
plotFile << "\"SoA -> AoS\"\t";
- const auto srcMapping = llama::mapping::SoA{userDomain, Particle{}};
- const auto dstMapping = llama::mapping::AoS{userDomain, Particle{}};
+ const auto srcMapping = llama::mapping::SoA{arrayDims, Particle{}};
+ const auto dstMapping = llama::mapping::AoS{arrayDims, Particle{}};
auto [srcView, srcHash] = prepareViewAndHash(srcMapping);
benchmarkCopy(
@@ -387,8 +387,8 @@ try
std::cout << "AoSoA" << LanesSrc << " -> AoSoA" << LanesDst << "\n";
plotFile << "\"AoSoA" << LanesSrc << " -> AoSoA" << LanesDst << "\"\t";
- const auto srcMapping = llama::mapping::AoSoA{userDomain};
- const auto dstMapping = llama::mapping::AoSoA{userDomain};
+ const auto srcMapping = llama::mapping::AoSoA{arrayDims};
+ const auto dstMapping = llama::mapping::AoSoA{arrayDims};
auto [srcView, srcHash] = prepareViewAndHash(srcMapping);
benchmarkCopy(
diff --git a/include/llama/Array.hpp b/include/llama/Array.hpp
index f46bfe83e3..129451b27a 100644
--- a/include/llama/Array.hpp
+++ b/include/llama/Array.hpp
@@ -17,7 +17,7 @@ namespace llama
struct Array
{
static constexpr std::size_t rank
- = N; // FIXME this is right from the ArrayDomain's POV, but wrong from the Array's POV
+ = N; // FIXME this is right from the ArrayDims's POV, but wrong from the Array's POV
T element[N > 0 ? N : 1];
LLAMA_FN_HOST_ACC_INLINE constexpr T* begin()
diff --git a/include/llama/ArrayDomainRange.hpp b/include/llama/ArrayDimsIndexRange.hpp
similarity index 61%
rename from include/llama/ArrayDomainRange.hpp
rename to include/llama/ArrayDimsIndexRange.hpp
index 65fc0c0965..d5350dd7bc 100644
--- a/include/llama/ArrayDomainRange.hpp
+++ b/include/llama/ArrayDimsIndexRange.hpp
@@ -10,19 +10,19 @@
namespace llama
{
- /// Iterator supporting \ref ArrayDomainIndexRange.
+ /// Iterator supporting \ref ArrayDimsIndexRange.
template <std::size_t Dim>
- struct ArrayDomainIndexIterator
+ struct ArrayDimsIndexIterator
{
- using value_type = ArrayDomain<Dim>;
+ using value_type = ArrayDims<Dim>;
using difference_type = std::ptrdiff_t;
using reference = value_type;
using pointer = internal::IndirectValue<value_type>;
using iterator_category = std::random_access_iterator_tag;
- constexpr ArrayDomainIndexIterator() noexcept = default;
+ constexpr ArrayDimsIndexIterator() noexcept = default;
- constexpr ArrayDomainIndexIterator(ArrayDomain<Dim> size, ArrayDomain<Dim> current) noexcept
+ constexpr ArrayDimsIndexIterator(ArrayDims<Dim> size, ArrayDims<Dim> current) noexcept
: lastIndex(
[size]() mutable
{
@@ -34,10 +34,10 @@ namespace llama
{
}
- constexpr ArrayDomainIndexIterator(const ArrayDomainIndexIterator&) noexcept = default;
- constexpr ArrayDomainIndexIterator(ArrayDomainIndexIterator&&) noexcept = default;
- constexpr auto operator=(const ArrayDomainIndexIterator&) noexcept -> ArrayDomainIndexIterator& = default;
- constexpr auto operator=(ArrayDomainIndexIterator&&) noexcept -> ArrayDomainIndexIterator& = default;
+ constexpr ArrayDimsIndexIterator(const ArrayDimsIndexIterator&) noexcept = default;
+ constexpr ArrayDimsIndexIterator(ArrayDimsIndexIterator&&) noexcept = default;
+ constexpr auto operator=(const ArrayDimsIndexIterator&) noexcept -> ArrayDimsIndexIterator& = default;
+ constexpr auto operator=(ArrayDimsIndexIterator&&) noexcept -> ArrayDimsIndexIterator& = default;
constexpr auto operator*() const noexcept -> value_type
{
@@ -49,7 +49,7 @@ namespace llama
return internal::IndirectValue{**this};
}
- constexpr auto operator++() noexcept -> ArrayDomainIndexIterator&
+ constexpr auto operator++() noexcept -> ArrayDimsIndexIterator&
{
for (auto i = (int) Dim - 1; i >= 0; i--)
{
@@ -64,14 +64,14 @@ namespace llama
return *this;
}
- constexpr auto operator++(int) noexcept -> ArrayDomainIndexIterator
+ constexpr auto operator++(int) noexcept -> ArrayDimsIndexIterator
{
auto tmp = *this;
++*this;
return tmp;
}
- constexpr auto operator--() noexcept -> ArrayDomainIndexIterator&
+ constexpr auto operator--() noexcept -> ArrayDimsIndexIterator&
{
for (auto i = (int) Dim - 1; i >= 0; i--)
{
@@ -86,7 +86,7 @@ namespace llama
return *this;
}
- constexpr auto operator--(int) noexcept -> ArrayDomainIndexIterator
+ constexpr auto operator--(int) noexcept -> ArrayDimsIndexIterator
{
auto tmp = *this;
--*this;
@@ -98,7 +98,7 @@ namespace llama
return *(*this + i);
}
- constexpr auto operator+=(difference_type n) noexcept -> ArrayDomainIndexIterator&
+ constexpr auto operator+=(difference_type n) noexcept -> ArrayDimsIndexIterator&
{
for (auto i = (int) Dim - 1; i >= 0 && n != 0; i--)
{
@@ -117,32 +117,29 @@ namespace llama
return *this;
}
- friend constexpr auto operator+(ArrayDomainIndexIterator it, difference_type n) noexcept
- -> ArrayDomainIndexIterator
+ friend constexpr auto operator+(ArrayDimsIndexIterator it, difference_type n) noexcept -> ArrayDimsIndexIterator
{
it += n;
return it;
}
- friend constexpr auto operator+(difference_type n, ArrayDomainIndexIterator it) noexcept
- -> ArrayDomainIndexIterator
+ friend constexpr auto operator+(difference_type n, ArrayDimsIndexIterator it) noexcept -> ArrayDimsIndexIterator
{
return it + n;
}
- constexpr auto operator-=(difference_type n) noexcept -> ArrayDomainIndexIterator&
+ constexpr auto operator-=(difference_type n) noexcept -> ArrayDimsIndexIterator&
{
return operator+=(-n);
}
- friend constexpr auto operator-(ArrayDomainIndexIterator it, difference_type n) noexcept
- -> ArrayDomainIndexIterator
+ friend constexpr auto operator-(ArrayDimsIndexIterator it, difference_type n) noexcept -> ArrayDimsIndexIterator
{
it -= n;
return it;
}
- friend constexpr auto operator-(const ArrayDomainIndexIterator& a, const ArrayDomainIndexIterator& b) noexcept
+ friend constexpr auto operator-(const ArrayDimsIndexIterator& a, const ArrayDimsIndexIterator& b) noexcept
-> difference_type
{
assert(a.lastIndex == b.lastIndex);
@@ -159,21 +156,21 @@ namespace llama
}
friend constexpr auto operator==(
- const ArrayDomainIndexIterator& a,
- const ArrayDomainIndexIterator& b) noexcept -> bool
+ const ArrayDimsIndexIterator& a,
+ const ArrayDimsIndexIterator& b) noexcept -> bool
{
assert(a.lastIndex == b.lastIndex);
return a.current == b.current;
}
friend constexpr auto operator!=(
- const ArrayDomainIndexIterator& a,
- const ArrayDomainIndexIterator& b) noexcept -> bool
+ const ArrayDimsIndexIterator& a,
+ const ArrayDimsIndexIterator& b) noexcept -> bool
{
return !(a == b);
}
- friend constexpr auto operator<(const ArrayDomainIndexIterator& a, const ArrayDomainIndexIterator& b) noexcept
+ friend constexpr auto operator<(const ArrayDimsIndexIterator& a, const ArrayDimsIndexIterator& b) noexcept
-> bool
{
assert(a.lastIndex == b.lastIndex);
@@ -184,55 +181,55 @@ namespace llama
std::end(b.current));
}
- friend constexpr auto operator>(const ArrayDomainIndexIterator& a, const ArrayDomainIndexIterator& b) noexcept
+ friend constexpr auto operator>(const ArrayDimsIndexIterator& a, const ArrayDimsIndexIterator& b) noexcept
-> bool
{
return b < a;
}
- friend constexpr auto operator<=(const ArrayDomainIndexIterator& a, const ArrayDomainIndexIterator& b) noexcept
+ friend constexpr auto operator<=(const ArrayDimsIndexIterator& a, const ArrayDimsIndexIterator& b) noexcept
-> bool
{
return !(a > b);
}
- friend constexpr auto operator>=(const ArrayDomainIndexIterator& a, const ArrayDomainIndexIterator& b) noexcept
+ friend constexpr auto operator>=(const ArrayDimsIndexIterator& a, const ArrayDimsIndexIterator& b) noexcept
-> bool
{
return !(a < b);
}
private:
- ArrayDomain<Dim> lastIndex;
- ArrayDomain<Dim> current;
+ ArrayDims<Dim> lastIndex;
+ ArrayDims<Dim> current;
};
- /// Range allowing to iterate over all indices in a \ref ArrayDomain.
+ /// Range allowing to iterate over all indices in a \ref ArrayDims.
template <std::size_t Dim>
- struct ArrayDomainIndexRange
+ struct ArrayDimsIndexRange
#if CAN_USE_RANGES
: std::ranges::view_base
#endif
{
- constexpr ArrayDomainIndexRange() noexcept = default;
+ constexpr ArrayDimsIndexRange() noexcept = default;
- constexpr ArrayDomainIndexRange(ArrayDomain<Dim> size) noexcept : size(size)
+ constexpr ArrayDimsIndexRange(ArrayDims<Dim> size) noexcept : size(size)
{
}
- constexpr auto begin() const noexcept -> ArrayDomainIndexIterator<Dim>
+ constexpr auto begin() const noexcept -> ArrayDimsIndexIterator<Dim>
{
- return {size, ArrayDomain<Dim>{}};
+ return {size, ArrayDims<Dim>{}};
}
- constexpr auto end() const noexcept -> ArrayDomainIndexIterator<Dim>
+ constexpr auto end() const noexcept -> ArrayDimsIndexIterator<Dim>
{
- auto endPos = ArrayDomain<Dim>{};
+ auto endPos = ArrayDims<Dim>{};
endPos[0] = size[0];
return {size, endPos};
}
private:
- ArrayDomain<Dim> size;
+ ArrayDims<Dim> size;
};
} // namespace llama
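
A usage sketch of this range (cf. the iteration documentation above); process is a placeholder for user code:

.. code-block:: C++

    // visits {0, 0}, {0, 1}, {1, 0}, {1, 1}
    for (auto coord : llama::ArrayDimsIndexRange{llama::ArrayDims{2, 2}})
        process(coord);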
diff --git a/include/llama/Concepts.hpp b/include/llama/Concepts.hpp
index ff276c0c13..4f9e097af9 100644
--- a/include/llama/Concepts.hpp
+++ b/include/llama/Concepts.hpp
@@ -13,12 +13,12 @@ namespace llama
// clang-format off
template <typename M>
concept Mapping = requires(M m) {
- typename M::ArrayDomain;
+ typename M::ArrayDims;
typename M::RecordDim;
{ M::blobCount } -> std::convertible_to<std::size_t>;
Array<int, M::blobCount>{}; // validates constexpr-ness
{ m.blobSize(std::size_t{}) } -> std::same_as<std::size_t>;
- { m.blobNrAndOffset(typename M::ArrayDomain{}) } -> std::same_as<NrAndOffset>;
+ { m.blobNrAndOffset(typename M::ArrayDims{}) } -> std::same_as<NrAndOffset>;
};
// clang-format on
diff --git a/include/llama/Core.hpp b/include/llama/Core.hpp
index c5164e1978..72f34f5627 100644
--- a/include/llama/Core.hpp
+++ b/include/llama/Core.hpp
@@ -18,29 +18,28 @@ namespace llama
{
};
- /// The run-time specified array domain.
- /// \tparam Dim compile time dimensionality of the array domain
+ /// The run-time specified array dimensions.
+ /// \tparam Dim Compile-time number of dimensions.
template <std::size_t Dim>
- struct ArrayDomain : Array<std::size_t, Dim>
+ struct ArrayDims : Array<std::size_t, Dim>
{
};
- static_assert(
- std::is_trivially_default_constructible_v<ArrayDomain<1>>); // so ArrayDomain<1>{} will produce a zeroed
- // coord. Should hold for all dimensions,
- // but just checking for <1> here.
+ static_assert(std::is_trivially_default_constructible_v<ArrayDims<1>>); // so ArrayDims<1>{} will produce a zeroed
+ // coord. Should hold for all dimensions,
+ // but just checking for <1> here.
template <typename... Args>
- ArrayDomain(Args...) -> ArrayDomain<sizeof...(Args)>;
+ ArrayDims(Args...) -> ArrayDims<sizeof...(Args)>;
} // namespace llama
template <size_t N>
-struct std::tuple_size<llama::ArrayDomain<N>> : std::integral_constant<size_t, N>
+struct std::tuple_size<llama::ArrayDims<N>> : std::integral_constant<size_t, N>
{
};
template <size_t I, size_t N>
-struct std::tuple_element<I, llama::ArrayDomain<N>>
+struct std::tuple_element<I, llama::ArrayDims<N>>
{
using type = size_t;
};
@@ -428,9 +427,9 @@ namespace llama
namespace internal
{
template <std::size_t Dim>
- constexpr auto popFront(ArrayDomain<Dim> ad)
+ constexpr auto popFront(ArrayDims<Dim> ad)
{
- ArrayDomain<Dim - 1> result;
+ ArrayDims<Dim - 1> result;
for (std::size_t i = 0; i < Dim - 1; i++)
result[i] = ad[i + 1];
return result;
@@ -438,14 +437,14 @@ namespace llama
} // namespace internal
template <std::size_t Dim, typename Func, typename... OuterIndices>
- void forEachADCoord(ArrayDomain<Dim> adSize, Func&& func, OuterIndices... outerIndices)
+ void forEachADCoord(ArrayDims<Dim> adSize, Func&& func, OuterIndices... outerIndices)
{
for (std::size_t i = 0; i < adSize[0]; i++)
{
if constexpr (Dim > 1)
forEachADCoord(internal::popFront(adSize), std::forward<Func>(func), outerIndices..., i);
else
- std::forward<Func>(func)(ArrayDomain{outerIndices..., i});
+ std::forward<Func>(func)(ArrayDims{outerIndices..., i});
}
}
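
A usage sketch for forEachADCoord, matching the signature in the hunk above:

.. code-block:: C++

    // row-major traversal: {0, 0}, {0, 1}, {0, 2}, {1, 0}, ...
    llama::forEachADCoord(llama::ArrayDims{2, 3}, [](auto coord) {
        // coord is a llama::ArrayDims<2>
    });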
diff --git a/include/llama/DumpMapping.hpp b/include/llama/DumpMapping.hpp
index d14fea2d07..44bd21bdb4 100644
--- a/include/llama/DumpMapping.hpp
+++ b/include/llama/DumpMapping.hpp
@@ -2,7 +2,7 @@
#pragma once
-#include "ArrayDomainRange.hpp"
+#include "ArrayDimsIndexRange.hpp"
#include "Core.hpp"
#include
@@ -56,8 +56,8 @@ namespace llama
return v;
}
- template <typename Mapping, typename ArrayDomain, std::size_t... Coords>
- auto mappingBlobNrAndOffset(const Mapping& mapping, const ArrayDomain& adCoord, RecordCoord<Coords...>)
+ template <typename Mapping, typename ArrayDims, std::size_t... Coords>
+ auto mappingBlobNrAndOffset(const Mapping& mapping, const ArrayDims& adCoord, RecordCoord<Coords...>)
{
return mapping.template blobNrAndOffset<Coords...>(adCoord);
}
@@ -72,7 +72,7 @@ namespace llama
}
template <std::size_t Dim>
- auto formatUdCoord(const llama::ArrayDomain<Dim>& coord)
+ auto formatUdCoord(const llama::ArrayDims<Dim>& coord)
{
if constexpr (Dim == 1)
return std::to_string(coord[0]);
@@ -105,7 +105,7 @@ namespace llama
template <std::size_t Dim>
struct FieldBox
{
- ArrayDomain<Dim> adCoord;
+ ArrayDims<Dim> adCoord;
std::vector<std::size_t> recordCoord;
std::vector<std::string> recordTags;
NrAndOffset nrAndOffset;
@@ -115,12 +115,12 @@ namespace llama
template <typename Mapping>
auto boxesFromMapping(const Mapping& mapping)
{
- using ArrayDomain = typename Mapping::ArrayDomain;
+ using ArrayDims = typename Mapping::ArrayDims;
using RecordDim = typename Mapping::RecordDim;
- std::vector<FieldBox<ArrayDomain::rank>> infos;
+ std::vector<FieldBox<ArrayDims::rank>> infos;
- for (auto adCoord : ArrayDomainIndexRange{mapping.arrayDomainSize})
+ for (auto adCoord : ArrayDimsIndexRange{mapping.arrayDimsSize})
{
forEachLeaf(
[&](auto coord)
diff --git a/include/llama/Proofs.hpp b/include/llama/Proofs.hpp
index 5fae7471af..0ba8be9d8a 100644
--- a/include/llama/Proofs.hpp
+++ b/include/llama/Proofs.hpp
@@ -6,15 +6,15 @@
// std::allocator
#ifdef __cpp_constexpr_dynamic_alloc
-# include "ArrayDomainRange.hpp"
+# include "ArrayDimsIndexRange.hpp"
# include "Core.hpp"
namespace llama
{
namespace internal
{
- template <typename Mapping, std::size_t... RecordCoords, std::size_t Dim>
- constexpr auto blobNrAndOffset(const Mapping& m, llama::RecordCoord<RecordCoords...>, ArrayDomain<Dim> ad)
+ template <typename Mapping, std::size_t... RecordCoords, std::size_t Dim>
+ constexpr auto blobNrAndOffset(const Mapping& m, llama::RecordCoord<RecordCoords...>, ArrayDims<Dim> ad)
{
return m.template blobNrAndOffset<RecordCoords...>(ad);
}
@@ -49,8 +49,8 @@ namespace llama
};
} // namespace internal
- // Proofs by exhaustion of the array domain and record dimension, that all values mapped to memory do not overlap.
- // Unfortunately, this only works for smallish array domains, because of compiler limits on constexpr evaluation
+ // Proves by exhaustion of the array and record dimensions that all values mapped to memory do not overlap.
+ // Unfortunately, this only works for smallish array dimensions, because of compiler limits on constexpr evaluation
// depth.
template <typename Mapping>
constexpr auto mapsNonOverlappingly(const Mapping& m) -> bool
@@ -74,7 +74,7 @@ namespace llama
{
if (collision)
return;
- for (auto ad : llama::ArrayDomainIndexRange{m.arrayDomainSize})
+ for (auto ad : llama::ArrayDimsIndexRange{m.arrayDimsSize})
{
using Type
= llama::GetType;
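
A usage sketch, assuming a constexpr-constructible mapping, a compiler providing __cpp_constexpr_dynamic_alloc as guarded above, RecordDim standing for any record dimension, and AoS template parameters as in the mapping documentation:

.. code-block:: C++

    constexpr auto m = llama::mapping::AoS<llama::ArrayDims<1>, RecordDim>{llama::ArrayDims{8}};
    static_assert(llama::mapsNonOverlappingly(m)); // proven by exhaustion at compile time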
diff --git a/include/llama/View.hpp b/include/llama/View.hpp
index 3606f4b803..5a75e68c5a 100644
--- a/include/llama/View.hpp
+++ b/include/llama/View.hpp
@@ -54,11 +54,11 @@ namespace llama
}
/// Allocates a \ref View holding a single record backed by stack memory (\ref bloballoc::Stack).
- /// \tparam Dim Dimension of the \ref ArrayDomain of the \ref View.
+ /// \tparam Dim Dimension of the \ref ArrayDims of the \ref View.
template <std::size_t Dim, typename RecordDim>
LLAMA_FN_HOST_ACC_INLINE auto allocViewStack() -> decltype(auto)
{
- using Mapping = llama::mapping::One<ArrayDomain<Dim>, RecordDim>;
+ using Mapping = llama::mapping::One<ArrayDims<Dim>, RecordDim>;
return allocView(Mapping{}, llama::bloballoc::Stack<sizeOf<RecordDim>>{});
}
@@ -315,12 +315,10 @@ namespace llama
isDirectListInitializableFromTuple> = isDirectListInitializable;
} // namespace internal
- /// Virtual record type returned by \ref View after resolving an array domain
- /// coordinate or partially resolving a \ref RecordCoord. A virtual record
- /// does not hold data itself (thus named "virtual"), it just binds enough
- /// information (array domain coord and partial record coord) to retrieve it
- /// from a \ref View later. Virtual records should not be created by the
- /// user. They are returned from various access functions in \ref View and
+ /// Virtual record type returned by \ref View after resolving an array dimensions coordinate or partially resolving
+ /// a \ref RecordCoord. A virtual record does not hold data itself (thus named "virtual"), it just binds enough
+ /// information (array dimensions coord and partial record coord) to retrieve it from a \ref View later. Virtual
+ /// records should not be created by the user. They are returned from various access functions in \ref View and
/// VirtualRecord itself.
template <typename T_View, typename BoundRecordCoord, bool OwnView>
struct VirtualRecord
@@ -328,10 +326,10 @@ namespace llama
using View = T_View; ///< View this virtual record points into.
private:
- using ArrayDomain = typename View::Mapping::ArrayDomain;
+ using ArrayDims = typename View::Mapping::ArrayDims;
using RecordDim = typename View::Mapping::RecordDim;
- const ArrayDomain userDomainPos;
+ const ArrayDims arrayDimsCoord;
std::conditional_t<OwnView, View, View&> view;
public:
@@ -342,15 +340,15 @@ namespace llama
LLAMA_FN_HOST_ACC_INLINE VirtualRecord()
/* requires(OwnView) */
- : userDomainPos({})
+ : arrayDimsCoord({})
, view{allocViewStack<1, RecordDim>()}
{
static_assert(OwnView, "The default constructor of VirtualRecord is only available if it owns the view.");
}
LLAMA_FN_HOST_ACC_INLINE
- VirtualRecord(ArrayDomain userDomainPos, std::conditional_t<OwnView, View&&, View&> view)
- : userDomainPos(userDomainPos)
+ VirtualRecord(ArrayDims arrayDimsCoord, std::conditional_t<OwnView, View&&, View&> view)
+ : arrayDimsCoord(arrayDimsCoord)
, view{static_cast<decltype(view)>(view)}
{
}
@@ -369,12 +367,12 @@ namespace llama
if constexpr (isRecord>)
{
LLAMA_FORCE_INLINE_RECURSIVE
- return VirtualRecord{userDomainPos, this->view};
+ return VirtualRecord{arrayDimsCoord, this->view};
}
else
{
LLAMA_FORCE_INLINE_RECURSIVE
- return this->view.accessor(userDomainPos, AbsolutCoord{});
+ return this->view.accessor(arrayDimsCoord, AbsolutCoord{});
}
}
@@ -386,12 +384,12 @@ namespace llama
if constexpr (isRecord