
Use separate Tokio runtime for gossip persistence. #73

Merged

Conversation

arik-so (Contributor) commented Mar 12, 2024

No description provided.

TheBlueMatt (Contributor):

LGTM, why is this draft?

TheBlueMatt (Contributor):

CI says "Cannot drop a runtime in a context where blocking is not allowed. This happens when a runtime is dropped from within an asynchronous context."

arik-so (Contributor Author) commented Mar 13, 2024

Because of that CI issue, which only happens in tests even though the code runs fine otherwise. I'm just trying to get my local tests to fail the same way so I can verify my fix. Shouldn't take long.
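For context, that panic is raised when a Tokio Runtime is dropped from inside async code. A minimal sketch of the approach named in the PR title, giving gossip persistence its own runtime on a dedicated thread so the runtime is built and dropped outside any async context (identifiers here are illustrative, not the PR's actual code):

```rust
use std::thread;
use tokio::runtime::Runtime;

// Spawn the persister on its own OS thread with its own Tokio runtime.
// Because the runtime is created and dropped on a plain thread, shutting it
// down can never hit "Cannot drop a runtime in a context where blocking is
// not allowed".
fn spawn_persistence_runtime() -> thread::JoinHandle<()> {
	thread::spawn(|| {
		let persistence_runtime = Runtime::new().expect("failed to build persistence runtime");
		persistence_runtime.block_on(async {
			// ... run the gossip persistence loop here ...
		});
		// `persistence_runtime` is dropped here, outside any async context.
	})
}
```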

@arik-so arik-so force-pushed the arik/2024-03-tokio-runtime-split branch from def546e to bf9d941 Compare March 13, 2024 23:38
@arik-so arik-so marked this pull request as ready for review March 13, 2024 23:41
let mut _db_schema = None;
#[cfg(test)]
{
	_db_schema = Some(crate::tests::db_test_schema());
}
Contributor:

Why can't this be in the #[test]?

Contributor Author:

You mean pass db_schema as a parameter to persist_gossip?

Contributor:

Ah, apologies, I missed the copy. Can we instead do this transparently as part of the connection object?

TheBlueMatt (Contributor) left a comment:

Otherwise LGTM.

@@ -162,6 +166,7 @@ impl<L: Deref> GossipPersister<L> where L::Target: Logger {
let mut connections_set = connections_cache_ref.lock().await;
connections_set.push(client);
limiter_ref.add_permits(1);
drop(connections_set);
Contributor:

Shouldn't we do this right after we build the client? Otherwise we end up holding the lock for the duration of the insert, killing our parallelism.

Contributor Author:

Yeah, I'll clean it up for real now. Thanks for the concept ACK.

Contributor Author:

Actually, a minor follow-up first: I moved the connections_set creation into a scope that is dropped prior to the DB insertion operations. Does that seem sufficient?

Contributor:

Yep, looks good now.
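A minimal sketch of the scoping fix described above, assuming a connection cache behind a tokio Mutex (names only loosely follow the diffs; this is illustrative, not the PR's exact code). The MutexGuard lives inside an inner block, so it is released before the insert runs and other tasks can check out connections concurrently:

```rust
use std::sync::Arc;
use tokio::sync::Mutex;

// Hypothetical stand-in for a database connection.
struct DbClient;

async fn insert_gossip(_client: &DbClient) {
	// ... the actual DB insert would happen here ...
}

async fn persist_one(connections_cache: Arc<Mutex<Vec<DbClient>>>) {
	// Take a connection out of the cache inside an inner scope so the
	// MutexGuard is dropped before the insert below.
	let client = {
		let mut connections_set = connections_cache.lock().await;
		connections_set.pop().unwrap_or(DbClient)
	}; // guard released here

	// The cache lock is no longer held while the (slow) insert runs.
	insert_gossip(&client).await;

	// Return the connection to the cache afterwards.
	connections_cache.lock().await.push(client);
}
```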

@@ -118,6 +122,19 @@ impl<L: Deref> GossipPersister<L> where L::Target: Logger {
insert_limiter.acquire().await.unwrap().forget();

let limiter_ref = Arc::clone(&insert_limiter);
let client = {
let connections_cache_ref = Arc::clone(&connections_cache);
Contributor:

You probably don't need the Arc now :)
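For background, the forget()/add_permits() pair in the diff above is the usual tokio::sync::Semaphore pattern for capping concurrent inserts: a permit is detached when work starts and handed back manually once the connection is returned. A minimal sketch of that pattern (not the PR's exact code):

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

// The caller builds the limiter once, e.g.:
//   let insert_limiter = Arc::new(Semaphore::new(8)); // illustrative cap
async fn persist_with_limit(insert_limiter: Arc<Semaphore>) {
	// Take a permit and detach it from its guard; after forget(), the permit
	// is NOT returned automatically when the guard goes out of scope.
	insert_limiter.acquire().await.unwrap().forget();

	let limiter_ref = Arc::clone(&insert_limiter);
	tokio::spawn(async move {
		// ... check out a connection and run the insert here ...

		// Hand the permit back explicitly once the connection is returned
		// to the cache, freeing the slot for the next insert.
		limiter_ref.add_permits(1);
	});
}
```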

@@ -162,6 +166,7 @@ impl<L: Deref> GossipPersister<L> where L::Target: Logger {
let mut connections_set = connections_cache_ref.lock().await;
connections_set.push(client);
limiter_ref.add_permits(1);
drop(connections_set);
Contributor:

Yep, looks good now.

@@ -162,6 +169,7 @@ impl<L: Deref> GossipPersister<L> where L::Target: Logger {
let mut connections_set = connections_cache_ref.lock().await;
connections_set.push(client);
limiter_ref.add_permits(1);
drop(connections_set);
Contributor:

This is dropping a lock, taken two lines up, right at the end of its scope, where it would be released anyway; you can just drop the explicit drop :)

Contributor Author:

Think about the femtoseconds, Matt.

@arik-so arik-so force-pushed the arik/2024-03-tokio-runtime-split branch from 2fab4ba to eac7646 Compare March 21, 2024 00:42
@@ -118,6 +121,18 @@ impl<L: Deref> GossipPersister<L> where L::Target: Logger {
insert_limiter.acquire().await.unwrap().forget();

let limiter_ref = Arc::clone(&insert_limiter);
let client = {
let connections_cache_ref = &connections_cache;
Contributor:

No need for this.

@arik-so arik-so force-pushed the arik/2024-03-tokio-runtime-split branch from 414ec15 to 560dabd Compare March 22, 2024 22:22
@arik-so arik-so force-pushed the arik/2024-03-tokio-runtime-split branch from 560dabd to 244ae25 Compare March 22, 2024 22:28
@arik-so arik-so force-pushed the arik/2024-03-tokio-runtime-split branch from 244ae25 to 418d776 Compare March 22, 2024 22:37
@TheBlueMatt TheBlueMatt merged commit 47cb2ca into lightningdevkit:main Mar 22, 2024
4 checks passed
@arik-so arik-so deleted the arik/2024-03-tokio-runtime-split branch March 22, 2024 22:46
@TheBlueMatt TheBlueMatt mentioned this pull request Mar 22, 2024