Commit
Mention that the methods were introduced in chapter 3
jgurhem committed Jan 15, 2021
1 parent eda78c7 commit 455e51f
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion chapters/exp_dense.tex
@@ -2,7 +2,7 @@ \chapter{Task-Based, Parallel and Distributed Dense Linear Algebra Applications
\graphicspath{{chapters/exp_dense/}}

  The task-based programming models used for the experiments in this dissertation were introduced and selected in the previous chapter.
- In this chapter, we introduce the dependency graphs of the block-based dense linear algebra algorithms for solving linear systems.
+ In this chapter, we introduce the dependency graphs of the block-based dense linear algebra algorithms for solving linear systems previously introduced in Chapter \ref{chap:methods}.
  Afterwards, these graphs are converted into YML+XMP applications that are used to perform experiments on the K computer.
  Then, we focus on the block-based LU factorization and implement it with YML+XMP, PaRSEC, Regent and HPX, since implementing all three linear system solvers with every task-based programming model would have taken too much time and is not the purpose of this dissertation.
  We perform experiments to compare and analyze the performance obtained with the different task-based implementations.
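The block-based LU factorization mentioned in the hunk above can be illustrated generically. This is not the thesis's YML+XMP/PaRSEC/Regent/HPX implementation, only a minimal NumPy sketch of right-looking blocked LU without pivoting; the function name and block size are assumptions:

```python
import numpy as np

def block_lu(A, bs):
    """Right-looking blocked LU without pivoting (block size bs).

    Returns one matrix holding the unit-lower factor L (strictly below
    the diagonal) and the upper factor U (on and above the diagonal).
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(0, n, bs):
        e = min(k + bs, n)
        # 1. Panel factorization: unblocked LU on columns k..e-1.
        for j in range(k, e):
            A[j + 1:, j] /= A[j, j]
            A[j + 1:, j + 1:e] -= np.outer(A[j + 1:, j], A[j, j + 1:e])
        # 2. Triangular solve for the U block right of the panel.
        L_kk = np.tril(A[k:e, k:e], -1) + np.eye(e - k)
        A[k:e, e:] = np.linalg.solve(L_kk, A[k:e, e:])
        # 3. Schur-complement update of the trailing submatrix.
        A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]
    return A

# Usage: a diagonally dominant matrix is safe without pivoting.
rng = np.random.default_rng(0)
A0 = rng.random((8, 8)) + 8 * np.eye(8)
F = block_lu(A0, bs=4)
L = np.tril(F, -1) + np.eye(8)
U = np.triu(F)
assert np.allclose(L @ U, A0)
```

Each of the three steps (panel factorization, triangular solve, trailing update) applied to each block is what becomes a task in a task-based dependency graph of the algorithm.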
2 changes: 1 addition & 1 deletion chapters/exp_sparse.tex
@@ -9,7 +9,7 @@ \chapter{Task-Based, Parallel and Distributed Sparse Linear Algebra Applications
  Sequences of sparse matrix products are important and widely used in several applications such as iterative methods and neural network training.
  However, two executions of the sparse matrix-vector product are enough to outline the algorithmic issues without performing too many computations.
  Therefore, the sparse operation $A(Ax+x)$ is considered, as it uses the sparse matrix-vector product twice.
- We implement these algorithms with the selected task-based programming models and perform numerical experiments on several clusters and supercomputers.
+ We implement the sparse matrix-vector algorithms previously introduced in Chapter \ref{chap:methods} with the selected task-based programming models and perform numerical experiments on several clusters and supercomputers.
  Finally, we discuss the results obtained in the experiments.


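The operation $A(Ax+x)$ from the hunk above decomposes into exactly two sparse matrix-vector products plus one vector addition. A minimal sketch with a hand-rolled CSR SpMV (the function names and CSR layout shown are generic, not taken from the thesis):

```python
import numpy as np

def spmv(indptr, indices, data, x):
    """CSR sparse matrix-vector product y = A x."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):                      # one output row at a time
        for k in range(indptr[i], indptr[i + 1]):  # nonzeros of row i
            y[i] += data[k] * x[indices[k]]
    return y

def a_ax_plus_x(indptr, indices, data, x):
    """Compute A(Ax + x) using the SpMV kernel twice."""
    t = spmv(indptr, indices, data, x)       # first SpMV: t = A x
    return spmv(indptr, indices, data, t + x)  # vector add, then second SpMV

# Usage: A = [[2, 0, 1], [0, 3, 0], [4, 0, 5]] in CSR form.
indptr = np.array([0, 2, 3, 5])
indices = np.array([0, 2, 1, 0, 2])
data = np.array([2.0, 1.0, 3.0, 4.0, 5.0])
x = np.array([1.0, 2.0, 3.0])
y = a_ax_plus_x(indptr, indices, data, x)

# Check against the dense computation.
Adense = np.array([[2.0, 0.0, 1.0], [0.0, 3.0, 0.0], [4.0, 0.0, 5.0]])
assert np.allclose(y, Adense @ (Adense @ x + x))
```

In a task-based formulation each SpMV becomes a set of tasks, and every task of the second SpMV depends on results of the first, which is what exposes the dependency-management issues the chapter studies.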
