
Behavior when the linear solver fails #71

Closed
gdalle opened this issue Jul 30, 2023 · 18 comments · Fixed by #83
Labels
feature New feature or request

Comments

@gdalle
Member

gdalle commented Jul 30, 2023

At the moment we throw an error; can we do better? NaNs maybe?

@thorek1
Contributor

thorek1 commented Jul 30, 2023

Returning a flag (Bool) indicating that the linear solver failed would be more helpful. Catching errors with try/catch is inefficient/costly.

@thorek1
Contributor

thorek1 commented Jul 30, 2023

Actually, this is making my tests fail in the branch where I try to use the latest ImplicitDifferentiation.jl v0.4.4.

@gdalle
Member Author

gdalle commented Jul 30, 2023

> Returning a flag (Bool) indicating that the linear solver failed would be more helpful. Catching errors with try/catch is inefficient/costly.

Yeah, but where should we return it? The linear solve happens deep inside the call stack, and when you do Zygote.jacobian(implicit, x) there is no natural way to send a flag back to the surface, is there?

@mohamed82008
Collaborator

Let's just propagate NaNs all the way up.
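
(For illustration, a hedged sketch of what NaN propagation looks like from the caller's side; `implicit` and `x` stand for whatever implicit function and input the user already has, and this is not a claim about the package's exact behavior.)

```julia
using Zygote

# With NaN propagation, a failed linear solve surfaces as NaNs in the result,
# which the caller can test for instead of catching an exception.
J = Zygote.jacobian(implicit, x)[1]
if any(isnan, J)
    # the linear solver failed somewhere deep in the call stack;
    # handle it here (skip the step, reject the sample, etc.)
end
```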

@thorek1
Contributor

thorek1 commented Jul 31, 2023

What about adding the flag as the last element of the returned values, like the linear solvers do (Krylov at least)?
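
(For reference, an illustrative snippet of the Krylov.jl convention being alluded to; `A` and `b` are assumed to be given.)

```julia
using Krylov

# Krylov.jl solvers return the solution together with a stats struct whose
# `solved` field plays the role of the flag suggested above.
x, stats = Krylov.gmres(A, b)
stats.solved  # true if the solver converged, false otherwise
```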

@mohamed82008
Collaborator

We still have to return something of the correct type at the end. Instead of returning garbage, let's just return a vector of NaNs. This is consistent with 0 / 0, which doesn't throw but returns NaN.

@thorek1
Contributor

thorek1 commented Jul 31, 2023

As of now it returns whatever Krylov finds (e.g. a least-squares solution), which is fine as long as you know it didn't solve. Returning NaNs should be fine as well. I hope it doesn't error before I can catch them.

@gdalle
Member Author

gdalle commented Jul 31, 2023

I don't know how to do this in a type-stable way, unless we promote everything to Float64, which is the type of NaN. And even then, if we replace the lu(Matrix(A)) of the direct linear solver with A or fill(NaN, size(A)), we have a type instability.

@mohamed82008
Collaborator

convert(float(T), NaN) does what you expect.
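
(A minimal sketch of that trick; the helper name `nan_like` is made up for illustration.)

```julia
# Build a NaN vector whose element type matches the input, so the failure path
# returns the same type as the success path (no promotion to Float64 needed).
nan_like(y::AbstractVector{T}) where {T} = fill(convert(float(T), NaN), length(y))

nan_like(Float32[1, 2, 3])  # 3-element Vector{Float32} filled with NaN32
nan_like([1.0, 2.0])        # 2-element Vector{Float64} filled with NaN
```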

@gdalle
Member Author

gdalle commented Jul 31, 2023

Ok, how about the LU presolve?

@mohamed82008
Collaborator

You can use

lu(A_static, check = false)

Then in solve, call issuccess on the output of lu to find out whether the LU succeeded. If it didn't, return NaNs.
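
(Putting the two pieces together, a hedged sketch of the suggested solve; variable names are illustrative.)

```julia
using LinearAlgebra

F = lu(A; check = false)   # does not throw if A is singular
if issuccess(F)
    y = F \ b              # normal solve
else
    y = fill(convert(float(eltype(b)), NaN), length(b))  # propagate NaNs
end
```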

@gdalle
Member Author

gdalle commented Jul 31, 2023

Cool, gonna try that

@thorek1
Contributor

thorek1 commented Aug 2, 2023

On a related note, if I understand your current ForwardDiff extension implementation of the Krylov solver correctly, you run it for each set of partials, right? Isn't that terribly redundant? Wouldn't it be better to either do it the same way as the direct solver (invert A in the presolve and reuse it later in the actual solve), or put all the partials in one long vector and run the solver once?

@gdalle
Member Author

gdalle commented Aug 2, 2023

That's a good point. With a presolve I don't think it changes much, but we would probably benefit from putting all the partials in a single matrix nonetheless.
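
(For illustration, the batching idea boils down to something like this; `partial_vectors` is a hypothetical name for the collected partials, not the package's code.)

```julia
using LinearAlgebra

# Instead of one linear solve per set of partials, stack them as the columns
# of a single right-hand-side matrix and factorize A only once.
B = reduce(hcat, partial_vectors)  # n × k matrix, one column per partial
F = lu(A)                          # single factorization
X = F \ B                          # all k columns solved with one factorization
```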

@thorek1
Contributor

thorek1 commented Aug 3, 2023

In my view this is the most pressing issue. When using Turing.jl with ImplicitDifferentiation v0.4.4, the errors trip up the sampler, making sampling impossible. There is no way of knowing upfront, outside of ImplicitDifferentiation, whether A is invertible. The best solution in my view is to return NaN.

@gdalle
Member Author

gdalle commented Aug 4, 2023

Gonna give it a try today

@gdalle gdalle added this to the v0.5 milestone Aug 4, 2023
@gdalle gdalle linked a pull request Aug 4, 2023 that will close this issue
@gdalle gdalle closed this as completed in #83 Aug 4, 2023
@thorek1
Contributor

thorek1 commented Aug 4, 2023

Tested and it works. Many thanks!

@gdalle
Member Author

gdalle commented Aug 4, 2023

Cool! I'm currently doing a sprint on the other issues; I hope to get the release out sometime this weekend.
