Support ComponentArrays #701
I was looking through the issues, and I came across this: #646. Not that my issue should hold up that one, but I think they're at least somewhat linked because the underlying question is about object storage and typing within the package. I'm guessing implementing either will run into similar problems related to properly initializing caches with the right types. I'll add that I think differentiability of the solvers would be an amazing feature. :)
Hi @jmulcahy!
Here's a complete example:

```julia
using ComponentArrays, LinearMaps, Krylov

function cumsum2D(data)
    Σrow = cumsum(data.arr, dims=1)
    Σcol = cumsum(data.arr, dims=2)
    ComponentArray(Σrow=Σrow, Σcol=Σcol)
end

function cumsum2Dᴴ(sums)
    arr = reverse(cumsum(reverse(sums.Σrow, dims=1), dims=1), dims=1)
    arr .+= reverse(cumsum(reverse(sums.Σcol, dims=2), dims=2), dims=2)
    ComponentArray(arr=arr)
end

data = ComponentArray(arr=randn(8, 8))
out = cumsum2D(data)
A = LinearMap(cumsum2D, cumsum2Dᴴ, length(out), length(data))
b = A * data
lsmr(A, b)
```

The last line throws; stacktrace:

I believe BLAS should still work with ComponentArrays since they're linear memory buffers; I believe that's one of the design goals of the package. My current workaround is to pass only "normal" arrays and do the reshaping by hand.
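As a quick sanity check that the forward/adjoint pair above is mathematically consistent (independent of any storage-type issues), one can run a standard dot-product test with plain arrays. The names `fwd` and `adj` below are ad-hoc stand-ins for `cumsum2D`/`cumsum2Dᴴ` that skip the ComponentArray wrapping:

```julia
using LinearAlgebra

# fwd stacks row- and column-wise cumulative sums into one 3D array,
# mirroring cumsum2D above without ComponentArrays.
fwd(x) = cat(cumsum(x, dims=1), cumsum(x, dims=2), dims=3)

# adj mirrors cumsum2Dᴴ: reversed cumulative sums, summed over both panels.
function adj(y)
    a = reverse(cumsum(reverse(y[:, :, 1], dims=1), dims=1), dims=1)
    a .+ reverse(cumsum(reverse(y[:, :, 2], dims=2), dims=2), dims=2)
end

x = randn(8, 8)
y = randn(8, 8, 2)
# For a true adjoint pair, ⟨fwd(x), y⟩ == ⟨x, adj(y)⟩ for any x, y.
@assert dot(fwd(x), y) ≈ dot(x, adj(y))
```

So the maps themselves are a valid adjoint pair; the failure below is purely about the vector types the solver allocates.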
Example with all the reshaping inserted (copies not avoided for brevity):

```julia
using LinearMaps, Krylov

function cumsum2D_reshaping(arr)
    reshaped = reshape(arr, 8, 8)
    sums = similar(arr, 8, 8, 2)
    sums[:, :, 1] = cumsum(reshaped, dims=1)
    sums[:, :, 2] = cumsum(reshaped, dims=2)
    sums[:]
end

function cumsum2Dᴴ_reshaping(sums)
    reshaped = reshape(sums, 8, 8, 2)
    arr = reverse(cumsum(reverse(reshaped[:, :, 1], dims=1), dims=1), dims=1)
    arr .+= reverse(cumsum(reverse(reshaped[:, :, 2], dims=2), dims=2), dims=2)
    arr[:]
end

data = randn(8, 8)
out = cumsum2D_reshaping(data[:])
A = LinearMap(cumsum2D_reshaping, cumsum2Dᴴ_reshaping, length(out), length(data))
b = A * data[:]
lsmr(A, b)
```
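The flattened pair can be checked against an explicitly materialized matrix: applying the forward map to each standard basis vector builds the dense 128×64 operator, and the adjoint map should then agree with its transpose. The definitions are restated here (under hypothetical names `fwd_flat`/`adj_flat`) so the snippet is self-contained:

```julia
using LinearAlgebra

function fwd_flat(arr)
    reshaped = reshape(arr, 8, 8)
    sums = similar(arr, 8, 8, 2)
    sums[:, :, 1] = cumsum(reshaped, dims=1)
    sums[:, :, 2] = cumsum(reshaped, dims=2)
    sums[:]
end

function adj_flat(sums)
    reshaped = reshape(sums, 8, 8, 2)
    arr = reverse(cumsum(reverse(reshaped[:, :, 1], dims=1), dims=1), dims=1)
    arr .+= reverse(cumsum(reverse(reshaped[:, :, 2], dims=2), dims=2), dims=2)
    arr[:]
end

# Materialize the 128×64 matrix column by column from basis vectors.
M = reduce(hcat, [fwd_flat(Float64.(1:64 .== j)) for j in 1:64])
y = randn(128)
# adj_flat must match the transpose of the materialized operator.
@assert M' * y ≈ adj_flat(y)
```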
Thanks for the examples. I'm currently traveling and I forgot my laptop charger, so I can't test them until Thursday...
Thanks for taking a look. That code hits this error on the last line:
Looking at the
I would like to find a solution without multiple storage types for the vectors in the KrylovSolvers. Your issue is similar to #605.
That is similar. Along those lines, I found this: https://github.com/JeffFessler/LinearMapsAA.jl, which creates linear mappings that return multi-dimensional arrays. I haven't looked closely, but I imagine it doesn't work with any of the standard solvers. I have a workaround for this, so no rush. I thought I'd file an issue since it seems like this ergonomics problem with structured data and multi-dimensional arrays might be common. In doing some thinking and prototyping, I thought the ComponentArray approach, which SciML appears to have adopted broadly, looked elegant. Thanks again for taking a look.
@jmulcahy
Maybe there is some way to utilize dispatch to instead hit the right method:

```julia
data = ComponentArray(x=randn(16, 16), y=randn(32, 32), z=1)
similar(data)
```

Under the hood, it's making a `ComponentArray` with the same structure.
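A minimal sketch of that dispatch idea, using a hypothetical wrapper type (`LabeledVec` is not part of any package): if the structured type overrides `similar`, the structure survives the allocations a solver makes internally, which is exactly what `ComponentArray` does.

```julia
# A toy structured vector: a plain buffer plus a label.
struct LabeledVec{T} <: AbstractVector{T}
    data  :: Vector{T}
    label :: Symbol
end

Base.size(v::LabeledVec) = size(v.data)
Base.getindex(v::LabeledVec, i::Int) = v.data[i]
# The key method: `similar` returns another LabeledVec, not a plain Vector.
Base.similar(v::LabeledVec) = LabeledVec(similar(v.data), v.label)

v = LabeledVec(randn(4), :x)
w = similar(v)
@assert w isa LabeledVec{Float64} && w.label == :x
```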
Ok, but for solvers specialized for rectangular matrices such as LSMR, you need to allocate vectors of length `m` and vectors of length `n`.
I need it because some of my problems are indefinite and rectangular. I think one way to do this would involve adjusting the type signature of `LsmrSolver`:

```julia
mutable struct LsmrSolver{T,FC,M,N} <: KrylovSolver{T,FC,M,N}
  m :: Int
  n :: Int
  x :: N
  Nv :: N
  Aᴴu :: N
  h :: N
  hbar :: N
  Mu :: M
  Av :: M
  u :: M
  v :: N
  err_vec :: Vector{T}
  stats :: LsmrStats{T}
end
```

So `M` is the storage type for the length-`m` vectors and `N` for the length-`n` vectors.
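The split can be illustrated with a toy struct (not Krylov.jl's actual definition): one parameter types the range-space vectors and another the domain-space vectors, so the two storage types can differ for rectangular problems.

```julia
# Toy illustration of the proposed two-parameter storage split.
struct ToyRectSolver{T,M,N}
    u :: M   # length m, lives in the range of A
    v :: N   # length n, lives in the domain of A
end

m, n = 16, 8
s = ToyRectSolver{Float64,Vector{Float64},Vector{Float64}}(zeros(m), zeros(n))
@assert (length(s.u), length(s.v)) == (16, 8)
```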
Hi @jmulcahy! Sorry for the delay...

```julia
using ComponentArrays

x = ComponentArray(a=5.0, b=[(a=20.0, b=0.0), (a=33.0, b=0.0)])
y = ComponentArray(a=[(a=3.0, b=1.0, c=7.0)], b=20.0)

n = length(x)
m = length(y)
window = 5
T = FC = Float64
S = ComponentVector{Float64}

LsmrSolver{T,FC,S}(
  m,
  n,
  similar(x),
  similar(x),
  similar(x),
  similar(x),
  similar(x),
  similar(y),
  similar(y),
  similar(y),
  similar(x),
  zeros(T, window),
  LsmrStats(0, false, false, T[], T[], zero(T), zero(T), zero(T), zero(T), zero(T), "unknown")
)
```

The best solution is to define a new constructor. It means that we can only use the in-place variant:

```julia
using ComponentArrays, LinearMaps, Krylov

function cumsum2D(data)
    Σrow = cumsum(data.arr, dims=1)
    Σcol = cumsum(data.arr, dims=2)
    ComponentArray(Σrow=Σrow, Σcol=Σcol)
end

function cumsum2Dᴴ(sums)
    arr = reverse(cumsum(reverse(sums.Σrow, dims=1), dims=1), dims=1)
    arr .+= reverse(cumsum(reverse(sums.Σcol, dims=2), dims=2), dims=2)
    ComponentArray(arr=arr)
end

data = ComponentArray(arr=randn(8, 8))
out = cumsum2D(data)
A = LinearMap(cumsum2D, cumsum2Dᴴ, length(out), length(data))
b = A * data

m, n = size(A)
window = 5
T = FC = eltype(b)
S = ComponentVector{T}
solver = LsmrSolver{T,FC,S}(
  m,
  n,
  similar(data),
  similar(data),
  similar(data),
  similar(data),
  similar(data),
  similar(out),
  similar(out),
  similar(out),
  similar(data),
  zeros(T, window),
  LsmrStats(0, false, false, T[], T[], zero(T), zero(T), zero(T), zero(T), zero(T), "unknown")
)
solver = lsmr!(solver, A, b)
```

```
┌──────────────────┬────────────────────────┬──────────────────┐
│        LsmrSolver│              nrows: 128│         ncols: 64│
├──────────────────┼────────────────────────┼──────────────────┤
│Precision: Float64│       Architecture: GPU│Storage: 218 bytes│
├──────────────────┼────────────────────────┼──────────────────┤
│         Attribute│                    Type│              Size│
├──────────────────┼────────────────────────┼──────────────────┤
│                 m│                   Int64│           8 bytes│
│                 n│                   Int64│           8 bytes│
│                 x│ComponentVector{Float64}│           8 bytes│
│                Nv│ComponentVector{Float64}│           8 bytes│
│               Aᴴu│ComponentVector{Float64}│           8 bytes│
│                 h│ComponentVector{Float64}│           8 bytes│
│              hbar│ComponentVector{Float64}│           8 bytes│
│                Mu│ComponentVector{Float64}│           8 bytes│
│                Av│ComponentVector{Float64}│           8 bytes│
│                 u│ComponentVector{Float64}│           8 bytes│
│                 v│ComponentVector{Float64}│           8 bytes│
│           err_vec│        Vector{Float64}│           40 bytes│
│             stats│      LsmrStats{Float64}│          90 bytes│
└──────────────────┴────────────────────────┴──────────────────┘

LsmrStats
niter: 30
solved: true
inconsistent: false
residuals: []
Aresiduals: []
residual: 1.8345314994833746e-6
Aresidual: 1.718353653426171e-6
κ₂(A): 5.973733856419315
‖A‖F: 22.260207728748714
xNorm: 8.39488702040155
status: found approximate zero-residual solution
```

```julia
julia> norm(b - A * solver.x)
1.8345314996956182e-6
```
We can also overload `sizeof` so that the storage column is reported correctly for `ComponentVector`s:

```julia
import Base.sizeof
import Krylov.ksizeof

function sizeof(v :: ComponentVector)
    type = typeof(v)
    nfields = fieldcount(type)
    storage = 0
    for i = 1:nfields
        field_i = getfield(v, i)
        size_i = Krylov.ksizeof(field_i)
        storage += size_i
    end
    return storage
end

solver
```

```
┌──────────────────┬────────────────────────┬──────────────────┐
│        LsmrSolver│              nrows: 128│         ncols: 64│
├──────────────────┼────────────────────────┼──────────────────┤
│Precision: Float64│       Architecture: GPU│Storage: 6.143 KiB│
├──────────────────┼────────────────────────┼──────────────────┤
│         Attribute│                    Type│              Size│
├──────────────────┼────────────────────────┼──────────────────┤
│                 m│                   Int64│           8 bytes│
│                 n│                   Int64│           8 bytes│
│                 x│ComponentVector{Float64}│         512 bytes│
│                Nv│ComponentVector{Float64}│         512 bytes│
│               Aᴴu│ComponentVector{Float64}│         512 bytes│
│                 h│ComponentVector{Float64}│         512 bytes│
│              hbar│ComponentVector{Float64}│         512 bytes│
│                Mu│ComponentVector{Float64}│        1024 bytes│
│                Av│ComponentVector{Float64}│        1024 bytes│
│                 u│ComponentVector{Float64}│        1024 bytes│
│                 v│ComponentVector{Float64}│         512 bytes│
│           err_vec│        Vector{Float64}│           40 bytes│
│             stats│      LsmrStats{Float64}│          90 bytes│
└──────────────────┴────────────────────────┴──────────────────┘
```
Ahh, very cool. I thought the information about where elements were in the
Sorry about the delay, but I did try this and it worked perfectly. Thanks for your help on this.
Great! After #706, the updated constructors of the KrylovSolvers should work with ComponentArrays.
It would be nice if Krylov.jl supported ComponentArrays.jl (https://github.com/jonniedie/ComponentArrays.jl). ComponentArrays let you create vectors that for the purposes of linear algebra are 1D but also have a structured overlay. For example:
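The example code appears to have been lost from this transcript; the following is a hypothetical reconstruction based on the `grad2D`, `∇row`, and `∇col` names referenced below (the `diff`-based forward differences are an assumption, not the original code):

```julia
using ComponentArrays

# Hypothetical: forward-difference gradients along rows and columns,
# returned as one flat vector with named, array-shaped blocks.
function grad2D(data)
    ∇row = diff(data.arr, dims=1)
    ∇col = diff(data.arr, dims=2)
    ComponentArray(∇row=∇row, ∇col=∇col)
end

data = ComponentArray(arr=randn(8, 8))
out = grad2D(data)
size(out.∇row), size(out.∇col)   # 2D views into a flat vector
```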
It's more natural to write `grad2D` in terms of the 2D array rather than some flattened form. It's also convenient to access `∇row` and `∇col` via `out.∇row` and `out.∇col` rather than raw indexing into the flat array. If you plug something like this into Krylov.jl it breaks somewhere around here: https://github.com/JuliaSmoothOptimizers/Krylov.jl/blob/main/src/krylov_solvers.jl#L1342. It looks like you'd need to expand the type specialization for every solver to at least support different types for `Aᴴu` and `Av`. I'm not sure what other stuff would break downstream.