Draft

49 commits
53c4e55
start of developer documents file
nathanielpritchard Feb 10, 2025
f82346b
correct issue in make file
nathanielpritchard Feb 10, 2025
95b0cd8
checklist
Tuna2222 Jun 25, 2025
64760f1
Checklist saperation
Tuna2222 Jul 2, 2025
1633503
dev
Tuna2222 Jul 2, 2025
c6de884
small changes
Tuna2222 Jul 23, 2025
f7a284f
yml
Tuna2222 Aug 6, 2025
0d10ada
docs
Tuna2222 Aug 13, 2025
1d5ac70
123
Tuna2222 Aug 13, 2025
ffba348
initialized the introduction
nathanielpritchard Aug 23, 2025
2c54d6a
slight intro edits
nathanielpritchard Aug 24, 2025
ff88338
removed redundant key function sec
nathanielpritchard Aug 24, 2025
6708ae9
rewrite in html
Tuna2222 Aug 24, 2025
63a9479
Add md with no Julia light
Tuna2222 Aug 24, 2025
c557f17
Only left the version with HTML and comment out others
Tuna2222 Aug 27, 2025
25f4099
123
Tuna2222 Sep 24, 2025
7c3fb87
merge
Tuna2222 Sep 24, 2025
7b0534d
Merge branch 'main' into v0.2-dev_docs
Tuna2222 Sep 24, 2025
6226f6b
merge main
Tuna2222 Sep 24, 2025
e83a636
0.24
Tuna2222 Sep 24, 2025
e5b94a9
docs/getting_started.md changes
Tuna2222 Oct 1, 2025
99fa32a
comment out CI.yml
Tuna2222 Oct 1, 2025
bba815a
delete md
Tuna2222 Oct 1, 2025
b6760bd
Merge branch 'main' into v0.2-tutorials
Tuna2222 Oct 1, 2025
0827c79
delete CI
Tuna2222 Oct 8, 2025
ff201f0
pr template
Tuna2222 Oct 8, 2025
43bec29
Update docs/src/tutorials/getting_started.md
Tuna2222 Oct 8, 2025
3198fad
Update docs/src/dev/style_guide.md
Tuna2222 Oct 8, 2025
c6caee7
Update docs/src/manual/introduction.md
Tuna2222 Oct 8, 2025
45f413f
Update docs/src/tutorials/getting_started.md
Tuna2222 Oct 8, 2025
2b5de08
docs/make pr non-overlap
Tuna2222 Oct 8, 2025
80129e4
Merge branch 'v0.2-tutorials' of https://github.com/numlinalg/RLinear…
Tuna2222 Oct 8, 2025
34c0dcc
docs/refine LS example
Tuna2222 Oct 8, 2025
1909a3e
docs/tutorials/intro
Tuna2222 Oct 8, 2025
80aeb6b
docs/purify the branch
Tuna2222 Oct 15, 2025
c79ffec
Merge branch 'main' into v0.2-tutorials
Tuna2222 Oct 15, 2025
c359f9d
docs/reformulate
Tuna2222 Oct 15, 2025
20fea52
docs/reformulate
Tuna2222 Oct 15, 2025
8c51331
docs/reformulate
Tuna2222 Oct 15, 2025
543f3e5
docs/reformulate
Tuna2222 Oct 16, 2025
8863bcb
docs/tutorial
Tuna2222 Oct 22, 2025
569da4e
Merge branch 'main' into v0.2-tutorials
Tuna2222 Oct 22, 2025
52d8322
10.29
Tuna2222 Oct 29, 2025
f492568
docs: fold the menu
Tuna2222 Nov 5, 2025
4b85022
docs/copilot
Tuna2222 Nov 5, 2025
8602044
docs/least_squares
Tuna2222 Nov 5, 2025
7a7732b
docs/change title
Tuna2222 Nov 5, 2025
c0362dc
Merge branch 'v0.2-tutorials' into v0.2-least_squares_tutorials
Tuna2222 Nov 5, 2025
b3d7dbb
docs/least_squares_short_introduction
Tuna2222 Nov 5, 2025
7 changes: 2 additions & 5 deletions .github/workflows/Documenter.yml
@@ -2,15 +2,12 @@ name: Documenter
 on:
   push:
     branches:
-      - master
       - main
-      - v0.2-main
-    tags: '*'
+    tags:
+      - 'v*'
   pull_request:
     branches:
-      - master
       - main
-      - v0.2-main
 jobs:
   build:
     runs-on: ubuntu-latest
15 changes: 11 additions & 4 deletions docs/make.jl
@@ -15,10 +15,17 @@ makedocs(
     modules = [RLinearAlgebra],
     pages = [
         "Home" => "index.md",
-        # "Tutorials" => [
-            # "Introduction" => "tutorials/introduction.md",
-            # "Getting started" => "tutorials/getting_started.md"
-        # ],
+        "Tutorials" => [
+            "Introduction" => "tutorials/introduction.md",
+            "Consistent Linear System" => [
+                "tutorials/consistent_system/consistent_system.md",
+                "tutorials/consistent_system/consistent_system_configure.md",
+            ],
+            "Least Squares" => [
+                "tutorials/least_squares/least_squares.md",
+                "tutorials/least_squares/least_squares_configure.md",
+            ],
+        ],
         "Manual" => [
             "Introduction" => "manual/introduction.md",
             "Compression" => "manual/compression.md",
50 changes: 50 additions & 0 deletions docs/src/tutorials/consistent_system/consistent_system.md
@@ -0,0 +1,50 @@
# Solving a Consistent Linear System

This guide demonstrates how to use the `RLinearAlgebra.jl` package to solve a
**consistent linear system**---a system where at least one solution
exists---expressed in the form:

$$Ax = b.$$

We'll walk through setting up the problem, using a solver, and verifying the result.


First, let's define our linear system $Ax = b$ with some known solution `x_true`.


```julia
using LinearAlgebra
num_rows, num_cols = 1000, 50;
A = randn(Float64, num_rows, num_cols);
x_true = randn(Float64, num_cols);
b = A * x_true;
```

```@setup ConsistentExample
using LinearAlgebra
num_rows, num_cols = 1000, 50;
A = randn(Float64, num_rows, num_cols);
x_true = randn(Float64, num_cols);
b = A * x_true;
```
`RLinearAlgebra.jl` can solve this system in just a few lines of code:

```@example ConsistentExample
using RLinearAlgebra
solver = Kaczmarz(log = BasicLogger(max_it = 300))
solution = zeros(Float64, num_cols)
rsolve!(solver, solution, A, b)
```
`rsolve!` stores the computed solution in the vector `solution`; the `!` indicates that the function modifies this argument in place.
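
As a quick sanity check (a minimal sketch using only the variables defined above, not part of the package's API), we can compare the computed `solution` with `x_true`:

```julia
using LinearAlgebra

# Both quantities should be small once the solver has converged
relative_error = norm(solution - x_true) / norm(x_true)
residual_norm  = norm(A * solution - b)
println("Relative error:           ", relative_error)
println("Residual norm ||Ax-b||_2: ", residual_norm)
```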

257 changes: 257 additions & 0 deletions docs/src/tutorials/consistent_system/consistent_system_configure.md
@@ -0,0 +1,257 @@
# Modular Customization: Beyond Defaults

In the previous guide, we showed how to solve a consistent linear system in just a few
lines of code. That example used the default configuration of the
[`Kaczmarz`](@ref Kaczmarz) solver, which is highly effective for many standard
problems.

However, the true power of **RLinearAlgebra.jl** lies in its high degree of modularity
and flexibility. You can fine-tune the solver's behavior by combining different
ingredients, much like composing a recipe, to tackle specific challenges, improve
performance, or implement more complex algorithms.



We will follow the design philosophy of **RLinearAlgebra.jl** by composing different
modules ([`Compressor`](@ref Compressor), [`Logger`](@ref Logger), [`Solver`](@ref Solver),
etc.) to customize a solver and solve the same consistent linear system,

$$Ax = b.$$


```@setup ConsistentExample
# Import relevant libraries
using RLinearAlgebra, LinearAlgebra


# Define the dimensions of the linear system
num_rows, num_cols = 1000, 50

# Create the matrix A and a known true solution x_true
A = randn(Float64, num_rows, num_cols);
x_true = randn(Float64, num_cols);

# Calculate the right-hand side vector b from A and x_true
b = A * x_true;
```


---
## Configure [`Compressor`](@ref Compressor)

For large-scale problems, the matrix $A$ can be massive, slowing down iterative algorithms.
We can use a randomized sketching technique to "compress" $A$ and $b$ to a lower dimension
while preserving the essential information of the system, so that we can solve the system
quickly with little loss of accuracy.

Here, we'll configure a [`SparseSign`](@ref SparseSign) compressor as an example.
This compressor generates a sparse matrix $S$, whose non-zero elements are +1 or -1
(with scaling).
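
To build intuition, here is a rough, self-contained sketch of the sparse-sign idea.
This is an illustration only; the exact layout and scaling used internally by
`RLinearAlgebra.jl` may differ.

```julia
using Random, SparseArrays

# Toy sparse-sign sketch: S has k rows; each column receives nnz_per_col random +/-1
# entries, scaled by 1/sqrt(nnz_per_col) so that S'S equals the identity in expectation.
function toy_sparse_sign(k, n, nnz_per_col)
    rows, cols, vals = Int[], Int[], Float64[]
    for j in 1:n
        for i in randperm(k)[1:nnz_per_col]   # random row positions in column j
            push!(rows, i)
            push!(cols, j)
            push!(vals, (rand(Bool) ? 1.0 : -1.0) / sqrt(nnz_per_col))
        end
    end
    return sparse(rows, cols, vals, k, n)
end

S_toy = toy_sparse_sign(30, 1000, 5)   # compress 1000 rows down to 30
size(S_toy), nnz(S_toy)
```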

### Configure the [`SparseSign`](@ref SparseSign) Compressor

We will configure a compression matrix that reduces the 1000 rows of our original
system down to a more manageable $30$ rows.

```@example ConsistentExample
# The goal is to compress the 1000 rows of A to 30 rows
compression_dim = 30
# We want each row of the compression matrix S to have 5 non-zero elements
non_zeros = 5

# Configure the SparseSign compressor
sparse_compressor = SparseSign(
    cardinality=Left(), # S will be left-multiplied: SAx = Sb
    compression_dim=compression_dim, # The compressed dimension (number of rows)
    nnz=non_zeros, # The number of non-zero elements per row in S
    type=Float64 # The element type for the compression matrix
)
```

If the compression dimension of `30` rows is considered too large, it can be changed to `10` by updating the compressor configuration as follows (any recipe built from it afterwards will use the new value):

```julia
# Change the compression dimension. The same approach works for updating
# any of the other configuration fields.
sparse_compressor.compression_dim = 10;
```

```@setup ConsistentExample
sparse_compressor.compression_dim = 10;
```

The `sparse_compressor` now contains all of the [`SparseSign`](@ref SparseSign)
configuration that we need.

---
### (Optional) Build the [`SparseSignRecipe`](@ref SparseSignRecipe) and apply it to the system

The solver can apply the `sparse_compressor` on the fly, so we could stop here and
move on to configuring the other "ingredients". However, it is sometimes useful to
form the compression matrix and the compressed system explicitly to see what the
compressor actually produces, so we will continue with that here.

After defining the compressor's parameters, we combine it with our matrix $A$ to
create a [`SparseSignRecipe`](@ref SparseSignRecipe). This "recipe"
pre-calculates the sparse matrix `S` and prepares everything needed for
efficient compression.

```@example ConsistentExample
# Pass the compressor configuration and the original matrix A to
# create the final compression recipe.
S = complete_compressor(sparse_compressor, A)

# You can take a closer look at the compression recipe you created.
println("Configurations of compression matrix:")
println(" - Compression matrix is applied to left or right: ", S.cardinality)
println(" - Compression matrix's number of rows: ", S.n_rows)
println(" - Compression matrix's number of columns: ", S.n_cols)
println(" - The number of nonzeros in each column (left)/row (right) of compression matrix: ", S.nnz)
println(" - Compression matrix's nonzero entry values: ", S.scale)
println(" - Compression matrix: ", typeof(S.op), size(S.op))
```
We can use `*` to apply this sparse matrix `S` to the system.

```@example ConsistentExample
# Form the compressed system SAx = Sb
SA = S * A
Sb = S * b

println("Dimensions of the compressed system:")
println(" - Matrix SA: ", size(SA))
println(" - Vector Sb: ", size(Sb))
```


---
## Configure [`Logger`](@ref Logger)

To monitor the solver and control its execution, we will configure a
[`BasicLogger`](@ref BasicLogger).
This object serves two purposes: tracking metrics (like the error history) and
defining stopping rules.

We'll configure it to stop after a maximum of `500` iterations or if the residual
error drops below `1e-6`. We will also set `collection_rate = 5` to record the
error every $5$ iterations.

```@example ConsistentExample
# Configure the logger to control the solver's execution
logger = BasicLogger(
    max_it = 500,
    threshold = 1e-6,
    collection_rate = 5
)
```

### (Optional) Build the [`BasicLoggerRecipe`](@ref BasicLoggerRecipe)

Just like the compressor, this `logger` is only a set of instructions.
To make it ready to store data, we can call [`complete_logger`](@ref complete_logger)
on it:


```@example ConsistentExample
# We can create the recipe manually, though this is rarely needed
logger_recipe = complete_logger(logger)
```
This `logger_recipe` is the object that actually contains the `hist` vector for
storing the error history, the current iteration count, and the convergence status.
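
As an illustration, you could peek at the recipe's storage. The field names below are
inferred from how the recipe is accessed later in this tutorial (`hist`, `iteration`),
so treat them as assumptions rather than documented API:

```julia
# Inspect the freshly built logger recipe (field names assumed, see note above)
println("Recipe type:              ", typeof(logger_recipe))
println("Length of history buffer: ", length(logger_recipe.hist))
println("Current iteration:        ", logger_recipe.iteration)
```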

Again, you almost never need to call `complete_logger` yourself, because the solver
(which we will configure next) or [`rsolve!`](@ref rsolve!) (the function that solves
the system) handles it for us. When we call [`complete_solver`](@ref complete_solver)
or [`rsolve!`](@ref rsolve!), it automatically finds the `logger` inside the solver,
calls `complete_logger` on it, and stores the resulting `logger_recipe` inside the
final `solver_recipe`.

---

## Configure [`Solver`](@ref Solver)

Now we assemble our configured ingredients—the `sparse_compressor` and the `logger`—into
the main [`Kaczmarz` solver](@ref Kaczmarz) object. For any component we don't specify,
a default will be used. Here, we'll explicitly specify the [`LQSolver`](@ref LQSolver)
as our sub-solver.

### Configure the [`Kaczmarz` solver](@ref Kaczmarz)
```@example ConsistentExample
# Create the Kaczmarz solver object by passing in the ingredients
kaczmarz_solver = Kaczmarz(
    compressor = sparse_compressor,
    log = logger,
    sub_solver = LQSolver()
)
```

### (Optional) Create the [`SolverRecipe`](@ref SolverRecipe)

This is the step where everything comes together.
Just as with the compressor, when we call
[`complete_solver`](@ref complete_solver), it takes the
`kaczmarz_solver` configurations and the problem data, and then:

1. Pre-allocates memory for everything needed in the algorithm.

2. Finds the `sparse_compressor` config inside `kaczmarz_solver` and calls `complete_compressor` to create the `SparseSignRecipe`.

3. Finds the `logger` inside `kaczmarz_solver` and calls `complete_logger` to create the `BasicLoggerRecipe`.

4. Bundles all these "recipes" into a single, ready-to-use `solver_recipe` object.

```@example ConsistentExample
# Set the solution vector x (typically a zero vector)
solution = zeros(Float64, num_cols);
# Create the solver recipe by combining the solver and the problem data
solver_recipe = complete_solver(kaczmarz_solver, solution, A, b);
```
Again, this step can be skipped entirely: you can pass the solver configuration
`kaczmarz_solver` directly to the function that solves the system, as we do below.



---
## Solve and Verify the Result

### Solve the System

We call [`rsolve!`](@ref rsolve!) to run the [`Kaczmarz` solver](@ref Kaczmarz).
The `!` in the name indicates that the function modifies its arguments in-place.
Here, `solution` will be updated with the solution vector.
The algorithm will run until a stopping criterion from our
`logger` is met.

```@example ConsistentExample
# Set the solution vector x (typically a zero vector)
solution = zeros(Float64, num_cols);

# Run the solver!
_, solver_history = rsolve!(kaczmarz_solver, solution, A, b);

# The solution is now stored in the updated solution vector
solution
```

### Verify the result

Finally, let's check how close our calculated solution is to the known
`x_true` and inspect the `logger` to see how the solver performed.

```@example ConsistentExample
# We can inspect the logger's history to see the convergence
error_history = solver_history.log.hist;
println(" - Solver stopped at iteration: ", solver_history.log.iteration)
println(" - Final residual error, ||Ax-b||_2: ", error_history[solver_history.log.record_location])

# Calculate the norm of the error
error_norm = norm(solution - x_true)
println(" - Norm of the error between the solution and x_true: ", error_norm)
```
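
To visualize the convergence, here is a minimal sketch. It assumes that `Plots.jl` is
installed, that the first `record_location` entries of `hist` hold the recorded
residuals, and that one entry was recorded every `collection_rate = 5` iterations:

```julia
using Plots

n_recorded = solver_history.log.record_location
residuals  = error_history[1:n_recorded]
iterations = 5 .* (1:n_recorded)   # approximate iteration counts for the x-axis

# Residual norm versus iteration on a logarithmic scale
plot(iterations, residuals;
     yscale = :log10, xlabel = "Iteration", ylabel = "||Ax - b||_2", legend = false)
```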

As you can see, by composing different modules, we configured a randomized
solver that found a solution vector very close to the true solution.
14 changes: 14 additions & 0 deletions docs/src/tutorials/introduction.md
@@ -0,0 +1,14 @@
# Introduction

The purpose of these tutorials is to help new users quickly gain hands-on experience
with `RLinearAlgebra.jl` through worked examples.

## How tutorials are structured

Problem sets:
- Consistent linear system problem: finding $x$ such that $Ax = b$.
- Least squares problem: finding $x = \arg\min_{x}\|Ax - b\|_2^2$ (the sketch below illustrates how the two differ).
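
The following minimal sketch (an illustration with arbitrary dimensions and noise level,
not taken from the package) shows how the two problem types differ: a consistent system
has a right-hand side in the range of $A$, whereas a least squares problem generally
does not.

```julia
using LinearAlgebra

A = randn(100, 10)
x_true = randn(10)

b_consistent = A * x_true                     # lies in the range of A, so Ax = b has an exact solution
b_noisy      = A * x_true + 0.1 * randn(100)  # generally not in the range of A

# For the consistent system the residual can be driven to (numerical) zero;
# for the least squares problem we can only minimize ||Ax - b||_2.
println("Consistent residual:    ", norm(A * (A \ b_consistent) - b_consistent))
println("Least squares residual: ", norm(A * (A \ b_noisy) - b_noisy))
```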



