Fix logging of closure limits #34
Conversation
Commits b4c2030 to 6d8ed9c
Some quick queries here - I'm losing track of the switches, sorry.
R/closure.R (Outdated)

```r
  state,
  c(N_AGE_GROUPS, N_MODEL_COMPARTMENTS, N_ECON_STRATA, N_VACCINE_STRATA)
)
cm <- parameters[["contact_matrix"]] %*% diag(parameters[["demography"]])
```
If neither of these inputs ever changes, would it be better to compute this once during parameter set-up rather than adding this matrix multiplication to every root check?
Added to parameters.
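As a minimal sketch of what "added to parameters" could look like, assuming a constructor along these lines exists (the `make_parameters()` name is hypothetical; the field names follow the snippet above):

```r
# Hypothetical parameter set-up: compute contact_matrix %*% diag(demography)
# once here, rather than on every root-check evaluation.
make_parameters <- function(contact_matrix, demography, r0) {
  list(
    contact_matrix = contact_matrix,
    demography = demography,
    r0 = r0,
    # precomputed product used by the closure root check
    cm = contact_matrix %*% diag(demography)
  )
}
```

The root check can then read `parameters[["cm"]]` directly instead of recomputing the product at every evaluation.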
R/closure.R
Outdated
c(N_AGE_GROUPS, N_MODEL_COMPARTMENTS, N_ECON_STRATA, N_VACCINE_STRATA) | ||
) | ||
cm <- parameters[["contact_matrix"]] %*% diag(parameters[["demography"]]) | ||
rt <- r_eff(parameters[["r0"]], state, cm) |
We might want to get an "r_greater_than_one" check function written at some point that could be much faster than computing this accurately - it would be good to get some profiling done at some point.
Would that be a simple wrapper around this operation, or is there something more complex I'm missing?
Something more complex - we don't care much about the eigenvalues themselves, we just care about the sign of the leading one. (Much) faster algorithms exist for finding the leading eigenvalue, and I think we can generalise that (or find an existing generalisation) if we only care about the sign. But there's no point doing any of that unless this is actually a timesink, hence the suggestion of profiling first.
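As a placeholder for that idea, a sketch of a sign/threshold-only wrapper (the function name and default threshold are hypothetical). For now it still calls `eigen()`, but it gives a single seam where a faster sign-only algorithm could later be dropped in without touching the calling code:

```r
# Hypothetical check: does the leading eigenvalue of A exceed a threshold?
# We skip the eigenvectors (only.values = TRUE) since only the value matters.
leading_eigenvalue_exceeds <- function(A, threshold = 1.0) {
  max(Re(eigen(A, only.values = TRUE)$values)) > threshold
}
```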
I have thoughts on this that open up another can of worms related to the
I've worked out a relatively simple mathematical expression for the eigenvalue here: https://hackmd.io/xfzSdlY-SnGblvP8aOA7Fw, as a precursor to calculating
```r
# NOTE: trigger response and log closure start time only if
# epidemic is growing
if (rt >= 1.0) {
```
Should we also be checking somewhere here that `switch` is already FALSE? Or is it sufficient that `hosp_switch` is FALSE?
I think that `switch` and `hosp_switch` are equivalent flags in stage 01 (`1:response_time`). The sequence of events is:

- Hospitalisations cross the capacity threshold, but `hosp_switch` is FALSE; `hosp_switch` is set to TRUE, activating excess mortality;
- IFF the epidemic is growing, the response is triggered and `switch` is turned on, activating a response; `time` is logged as `closure_start_time`.

I have split up the two switches in `make_rt_end_event()` as they are independent once the response is triggered by the default `response_time` (t = 30).
The issue with the switches is that they're split over multiple model stages and events, rather than having an integrated event function that modifies each switch from within the ODE RHS. This isn't going to get any easier to keep track of as I need to add a vaccination switch, which I omitted in #33.
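For illustration, the sequence above could be collected into one event helper along these lines (a sketch only; all names here are hypothetical and the real code splits this logic across stages):

```r
# Hypothetical integrated event: update both switches in one place.
# `hosp_switch` gates excess mortality; `switch` gates the closure response.
update_closure_switches <- function(time, flags, hosp_over_capacity,
                                    epidemic_growing) {
  if (hosp_over_capacity && !flags$hosp_switch) {
    flags$hosp_switch <- TRUE          # activate excess mortality
  }
  if (flags$hosp_switch && epidemic_growing && !flags$switch) {
    flags$switch <- TRUE               # activate the response
    flags$closure_start_time <- time   # log closure start
  }
  flags
}
```

Having a single function like this would make the FALSE/TRUE ordering of the two switches explicit, rather than implicit in the staging.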
Just following up on the eigenvalue calculation comment with this reprex benchmarking the power iteration method for eigenvalue approximation against R's `eigen()`. I can look into other algorithms, or we could work out the maths with EPPI, as I'd be more confident in the model if both use-cases of the eigenvalue were consistent.

```r
# power iteration method for leading eigenvalue
power_iteration <- function(A, tol = 1e-6, max_iter = 1000) {
  # Initialize a random vector
  n <- nrow(A)
  v <- rnorm(n)
  v <- v / sqrt(sum(v^2)) # Normalize the vector
  lambda_old <- 0 # Initial value for the leading eigenvalue
  for (i in 1:max_iter) {
    # Multiply the matrix by the vector
    v_new <- A %*% v
    # Normalize the new vector
    v_new <- v_new / sqrt(sum(v_new^2))
    # Estimate the leading eigenvalue (Rayleigh quotient)
    lambda_new <- as.numeric(t(v_new) %*% A %*% v_new)
    # Check for convergence
    if (abs(lambda_new - lambda_old) < tol) {
      break
    }
    # Update the vector and eigenvalue for the next iteration
    v <- v_new
    lambda_old <- lambda_new
  }
  sign(lambda_new)
}

# R `eigen()` method for eigenvalues
f <- function(A) {
  eigv <- max(Re(eigen(A)$values))
  sign(eigv)
}

# small matrix similar to contact matrix %*% diag(p_susc)
A <- matrix(
  runif(16, 0, 9999), 4, 4
)

# check for 4x4 matrix
microbenchmark::microbenchmark(
  power_iteration = power_iteration(A),
  trad = f(A),
  check = "identical"
)
#> Unit: microseconds
#>            expr     min      lq     mean  median      uq      max neval
#> power_iteration  23.410 27.9335 31.84206 29.7945 35.4180   64.752   100
#>            trad  44.339 48.2375 66.72733 50.8470 60.3515 1322.670   100

A <- matrix(
  runif(10000, 0, 9999), 100, 100
)

# check for 100x100 matrix
microbenchmark::microbenchmark(
  power_iteration = power_iteration(A),
  trad = f(A),
  check = "identical"
)
#> Unit: microseconds
#>            expr      min       lq      mean  median       uq      max neval
#> power_iteration  111.798  134.937  145.8999  140.80  154.838  225.568   100
#>            trad 3759.610 3883.711 3955.9885 3923.24 3968.626 5882.294   100
```
Let's also find out what fraction of the total time this takes before worrying - in the case that prompted writing eigen1, it was taking something like 70% of the total time to simulate! But those were also quite large matrices (on the order of hundreds of rows/columns), where the gains were large. In the case here, we should be able to adapt the power method to work out whether we're heading above or below 1, which would be faster again. But if you have an actual mathematical solution, which I'd believe is findable for a 4x4, that will surely be the fastest.
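To sketch the "above or below 1" adaptation: the power iteration from the reprex can exit as soon as successive Rayleigh-quotient estimates settle on one side of 1, rather than converging to full tolerance. This assumes the leading eigenvalue is real and well separated, as for these non-negative matrices; the function name and `settle` heuristic are hypothetical:

```r
# Hypothetical early-exit power iteration: stop once the estimate of the
# leading eigenvalue has stayed on the same side of 1 for `settle` steps.
leading_above_one <- function(A, max_iter = 1000L, settle = 3L) {
  v <- rep(1, nrow(A))
  v <- v / sqrt(sum(v^2))
  side <- NA
  run <- 0L
  for (i in seq_len(max_iter)) {
    v <- A %*% v
    v <- v / sqrt(sum(v^2))
    lambda <- as.numeric(t(v) %*% A %*% v) # Rayleigh quotient
    new_side <- lambda > 1
    run <- if (identical(new_side, side)) run + 1L else 1L
    side <- new_side
    if (run >= settle) break # estimate has settled on one side of 1
  }
  side
}
```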
This PR fixes an issue where closures were not ended in model runs where the closure began after the epidemic had stopped growing. This mostly affected edge cases of countries with large spare hospital capacity and relatively late `response_time`s. In such cases the closure end time was assigned as the simulation end time, inflating costs related to closures. The fix:

- Prevents closures from being activated by the `hospital_capacity` trigger if the epidemic is not growing, even if the hospital capacity threshold is crossed;
- Prevents closures from being activated by the `response_time` trigger if the epidemic is not growing between stage 01 and stage 02. Closures are manually turned off if the epidemic is not growing ($R_t < 1.0$).

Tests for different response times check that the model behaves as expected.

Miscellaneous changes

- `array2DF()`.

This PR builds off #33 and needs to be rebased on `main` before merging.