Behavior when the linear solver fails #71
Comments
Returning a flag (`Bool`) indicating that the linear solver failed would be more helpful. Catching errors with …
Actually, this is making my tests fail in the branch where I try to use the latest ImplicitDifferentiation.jl v0.4.4.
Yeah, but where should we return it? The linear solve happens deep inside the call stack, and when you do …
Let's just propagate …
What about adding the flag as the last element of the returned values, like they do with the linear solvers (Krylov at least)?
We still have to return something of the correct type at the end. Instead of returning garbage, let's just return a vector of `NaN`s. This is consistent with …
As of now it returns whatever Krylov finds (e.g. a least-squares solution), which is fine as long as you know it didn't solve. Returning `NaN`s should be fine as well. I hope it doesn't error before I can catch them.
I don't know how to do this in a type-stable way, unless we promote everything to …
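The `NaN` fallback discussed above is language-agnostic; here is a minimal sketch in Python/NumPy (not the package's actual internals — `solve_or_nan` is a hypothetical helper name). Note that the fallback only works for a floating-point element type, which is exactly the type-stability concern raised here:

```python
import numpy as np

def solve_or_nan(A, b):
    """Solve A x = b, returning a NaN-filled vector instead of raising
    when A is singular.  Illustrative sketch of the fallback idea
    discussed in the thread, not ImplicitDifferentiation.jl code."""
    try:
        return np.linalg.solve(A, b)
    except np.linalg.LinAlgError:
        # Same shape and (float) dtype as a successful solve would return
        return np.full_like(b, np.nan, dtype=float)
```

A downstream caller (e.g. a sampler) can then detect failure cheaply with `np.isnan` instead of wrapping every call in a try/except.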
Ok, how about the LU presolve? |
You can use `lu(A_static, check = false)`. Then in …
Cool, gonna try that |
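The presolve suggestion above (Julia's `lu(A; check = false)` followed by an `issuccess` check) can be sketched in Python with SciPy's LU routines — a zero pivot on the factor's diagonal signals failure. This is an illustration of the pattern, not the package's implementation, and `lu_presolve` is a made-up name:

```python
import warnings
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def lu_presolve(A):
    """Factor A once up front without raising on singularity, mirroring
    `lu(A_static; check = false)` + `issuccess` in Julia.  Returns the
    factorization plus a success flag."""
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")  # SciPy warns on exactly singular input
        lu, piv = lu_factor(A)
    # A (near-)zero pivot on the diagonal of U means the solve is unreliable
    ok = not np.any(np.isclose(np.diag(lu), 0.0))
    return (lu, piv), ok

factors, ok = lu_presolve(np.array([[2.0, 0.0], [0.0, 3.0]]))
if ok:
    x = lu_solve(factors, np.array([4.0, 9.0]))
```

The factorization is paid for once in the presolve; each subsequent solve (one per set of partials) is then a cheap triangular back-substitution.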
On a related note: if I understand your current ForwardDiff ext implementation of the Krylov solver correctly, you do it for each set of partials, right? Isn't that terribly redundant? Wouldn't it be better to either do it the same way you do it for the direct solver (invert A in the presolve and use that later in the actual solve), or put all the partials in one long vector and run the solver once?
That's a good point. With a presolve I don't think it changes much, but we would probably benefit from putting all partials in a single matrix nonetheless.
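The batching idea — solving for all partials at once instead of once per partial — amounts to stacking the right-hand sides as columns of one matrix. A minimal NumPy sketch (variable names are made up for illustration):

```python
import numpy as np

# Instead of solving A x_k = b_k once per partial (k = 1..K), stack the
# partials as columns of a single right-hand-side matrix and solve once.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
partials = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # K = 2 partials
B = np.stack(partials, axis=1)                           # shape (n, K)

X_batched = np.linalg.solve(A, B)  # one call covers all partials
X_looped = np.column_stack(
    [np.linalg.solve(A, B[:, k]) for k in range(B.shape[1])]
)
assert np.allclose(X_batched, X_looped)
```

With a direct factorization the batched version pays for the decomposition once and reuses it across all columns, which is where the savings come from.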
In my view this is the most pressing issue. When using Turing.jl with ImplicitDifferentiation v0.4.4, the errors trip up the sampler, rendering sampling impossible. There is no way of knowing upfront, outside of ImplicitDifferentiation, whether A is invertible. The best solution, in my view, is to return `NaN`.
Gonna give it a try today |
Tested and works. Many thanks!
Cool! I'm currently doing a sprint on the other issues; hope to get the release out sometime this weekend.
At the moment we throw an error, can we do better? `NaN`s maybe?