So while we'd previously fixed hang issues, there's still one remaining: if the DB backs up and we start blocking in the peer message handler, that's fine on its own, except that the handler is holding the peer read lock in LDK's `peer_handler`. Because that's a sync lock, any future call into LDK's `peer_handler` that needs the same peer's lock (or the total peer write lock) will immediately block until the DB catches up. That's still fine, unless there's enough work going on in the `peer_handler` to block enough tokio tasks that we no longer make progress on the DB, which can happen if there are lots of connections churning or not very many worker threads.
Not sure how to solve this; it may ultimately need a may-block-on-async flag in `lightning-net-tokio`, which would make `lightning-net-tokio` much less efficient by calling `block_in_place`, but would allow this type of application.