
Batch mode #552

Open
vboussange opened this issue Oct 22, 2024 · 4 comments

Comments

@vboussange

Hey there,
I am trying to optimise the ConScape.jl package, a library for ecological connectivity analysis. The computational bottleneck lies in these lines, where a large linear system is solved.

https://github.com/ConScape/ConScape.jl/blob/c6d8e3d82a26f181c4f734f5d34e6dcd1b3dc99b/src/gridrsp.jl#L21-L25

I'd love to be able to try some of LinearSolve.jl's solvers for this task, but that requires batch mode, a feature supported by the `\` operator:

```julia
A = randn(4, 4)
B = randn(4, 4)
A \ B
```

but not by LinearSolve.jl:

```julia
using LinearSolve

prob = LinearProblem(A, B)
sol = solve(prob)
```
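As a stopgap, a column-by-column loop over the right-hand sides can already reuse a single factorization through LinearSolve.jl's caching interface. A minimal sketch (the choice of `LUFactorization` is illustrative):

```julia
using LinearSolve

A = randn(4, 4)
B = randn(4, 4)

# Factorize A once, then reuse the factorization for every column of B.
prob = LinearProblem(A, B[:, 1])
cache = init(prob, LUFactorization())
X = similar(B)
for j in axes(B, 2)
    cache.b = B[:, j]          # swap the right-hand side; A is not refactorized
    X[:, j] = solve!(cache).u
end
```

After the loop, `X` should match the batched backslash solve `A \ B`, but each column still goes through a separate `solve!` call rather than one BLAS3 solve.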

I have suggested this feature on Slack in July, but I make it official here. @amontoison raised the point that other SciML users could be interested, and had some idea on how to implement it.
It is not in my capacity to investigate this, but maybe one of the developers here could have a look at it?
Cheers!

@ChrisRackauckas
Member

Yeah we really need to do this at some point.

@avik-pal you set up a batch mode for SimpleGMRES at one point? Is that code still around?

@avik-pal
Member

That is a different kind of batching. It is essentially $A x = b$ where $A$ is $N \times M \times B$ and $b$ is $N \times B$, so $x$ is $M \times B$. Here we have $A$ as $N \times M$ and $b$ as $N \times B$.
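To make the two shapes concrete, here is a plain-Julia sketch (names illustrative) contrasting the per-system batching described above with the shared-matrix batching requested in this issue:

```julia
N, M, K = 4, 4, 3   # K systems in the batch

# SimpleGMRES-style batching: one matrix A_i per right-hand side.
As = randn(N, M, K)                              # N × M × K
bs = randn(N, K)                                 # N × K
xs = stack(As[:, :, i] \ bs[:, i] for i in 1:K)  # M × K (`stack` needs Julia ≥ 1.9)

# Shared-matrix batching (this issue): one A, many right-hand sides.
A = randn(N, M)                                  # N × M
B = randn(N, K)                                  # N × K
X = A \ B                                        # M × K in a single call
```

In the first case every system has its own operator; in the second, one operator is shared, which is what lets a factorization (or a block-Krylov subspace) be reused across all columns.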

@amontoison
Contributor

amontoison commented Oct 28, 2024

@avik-pal I reviewed SimpleGMRES again, but I’m still not entirely sure what it's doing.
The docstring mentions it’s optimized for block diagonal matrices, but there’s no mention of batching.
If there is batching, I assume it means solving multiple systems A_i x_i = b_i simultaneously, if I understand the problem correctly.

It does make sense to combine some operations to leverage BLAS2/BLAS3 instead of BLAS1.
However, this isn’t a “tensor-like” Krylov method, which is a different approach altogether (never implemented to the best of my knowledge).
If all matrices A_i are the same, you should avoid this and instead use a block-Krylov method.
You will directly see the difference in performance, because you seek the solutions in one block-Krylov subspace instead of multiple Krylov subspaces.

I quickly opened PR #554 in case it’s helpful.

@ChrisRackauckas
Member

No, that's the batching @avik-pal was talking about. The most common form of batching is Ax = B where B is a matrix; A\B is a natural way to solve for all of the columns at once. But that is an odd fit for the LinearSolve API, because A isn't actually a linear operator on B, it's a linear operator on slices of B. The current API of LinearSolve assumes A is an operator on B, so for a matrix B it's essentially Ax = vec(B). This is somewhat required, because otherwise operations on arbitrary types don't make sense in a general way.

But it is natural to want the B batching. I think we'd need a BatchedLinearProblem to signify it at the type level and set up solvers for that, since you need type information to know the correct interpretation at compile time. A batch = Val(true) option in LinearProblem could do it too.
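Neither `BatchedLinearProblem` nor a `batch = Val(true)` option exists in LinearSolve.jl today; the following is purely a hypothetical sketch of what such a problem type could look like, with a naive fallback that loops over columns while reusing one factorization through the existing caching interface:

```julia
using LinearSolve

# Hypothetical type — BatchedLinearProblem is NOT part of LinearSolve.jl.
struct BatchedLinearProblem{TA,TB}
    A::TA
    B::TB
end

# Naive fallback: treat each column of B as an independent right-hand side.
function solve_batched(prob::BatchedLinearProblem)
    (; A, B) = prob
    X = similar(B, size(A, 2), size(B, 2))
    cache = init(LinearProblem(A, B[:, 1]))   # default solver; factorizes once
    for j in axes(B, 2)
        cache.b = B[:, j]
        X[:, j] = solve!(cache).u
    end
    return X
end

A = randn(5, 5)
B = randn(5, 3)
X = solve_batched(BatchedLinearProblem(A, B))
```

A real implementation would instead dispatch specialized methods on the new type (multi-RHS factorization solves, or block-Krylov methods as discussed above) rather than looping over columns.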
