problem
Currently, when a hazard pointer is created using hazard_ptr(const void*), a thread_local static variable is used to track where that thread's last hazard pointer was on the global hazard linked list, in order to speed up acquiring the hazard while maintaining that thread's ownership of the node in the linked list. When a thread acquires its first hazard pointer, it places a hazard node on the global hazard linked list and keeps the location of that node for future reference.
When the thread finishes execution and is joined, however, it does not remove its own hazard nodes from the linked list, nor is there any other mechanism in place to remove them.
This means that if many threads are created over the course of a running application, the hazard linked list can grow without bound, consuming memory and increasing the time it takes to traverse and discover all hazards during the reclamation cycle, even though most of the list is populated with nullptr hazards.
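To make the shape of the problem concrete, here is a minimal sketch of the structures described above. The names (hazard_node, g_hazard_list, t_my_node, acquire_node) and layout are assumptions for illustration, not this library's actual internals:

```cpp
#include <atomic>

// Hypothetical node in the global hazard list (illustration only).
struct hazard_node {
    std::atomic<const void*> hazard{nullptr};  // pointer currently protected, or nullptr
    hazard_node*             next{nullptr};    // written once, before the node is published
};

// Global, grow-only list of hazard nodes shared by every thread.
inline std::atomic<hazard_node*> g_hazard_list{nullptr};

// Each thread caches the node it claimed on its first hazard acquisition so
// later acquisitions can reuse it instead of walking the list again.
inline thread_local hazard_node* t_my_node = nullptr;

hazard_node* acquire_node() {
    if (t_my_node) return t_my_node;           // fast path: reuse the cached node
    // Slow path (first hazard for this thread): push a fresh node onto the
    // global list with a CAS loop and remember where it is.
    auto* n = new hazard_node;
    hazard_node* head = g_hazard_list.load(std::memory_order_relaxed);
    do {
        n->next = head;
    } while (!g_hazard_list.compare_exchange_weak(
                 head, n, std::memory_order_release, std::memory_order_relaxed));
    return t_my_node = n;
    // Nothing ever unlinks n, so the list only grows: the problem described above.
}
```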
possible solutions
On thread exit, acquire a lock over the hazard pointer linked list, remove the thread's owned nodes (either stored thread-locally in a list or in the hazard pointer itself), and then release the lock. This is obviously blocking, which somewhat undermines the purpose of lock-freedom, but since it only occurs on thread exit, it's a reasonable idea; a rough sketch of this approach follows the list.
On thread exit, remove the nodes from the hazard list in a lock-free way. Unfortunately, this isn't possible without some deferred reclamation system, since other readers of the list might be accessing our nodes while we attempt to delete them.
Defer cleanup to the user, with a user-callable function that locks and then deletes the nodes produced by since-exited threads. This would require storing thread-identifying information in the nodes and a way to check whether that thread is still running based on that information.
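Here is a rough sketch of the first option, under two stated assumptions: it reuses the hypothetical hazard_node / g_hazard_list / t_my_node names from the sketch above, and it assumes the reclamation scan traverses the list while holding the same mutex, so freeing a node cannot race with a scanner. Insertion can stay a lock-free CAS push, because the unlink retries whenever a new head has been pushed in front of our node.

```cpp
#include <mutex>

// Taken on thread exit and (by assumption) during the reclamation scan.
inline std::mutex g_hazard_list_mutex;

// RAII guard whose destructor runs at thread exit and unlinks this thread's node.
struct hazard_node_reaper {
    ~hazard_node_reaper() {
        hazard_node* node = t_my_node;
        if (!node) return;                       // thread never acquired a hazard
        std::lock_guard<std::mutex> guard(g_hazard_list_mutex);

        // Try to unlink at the head first; a failed CAS just means other
        // threads have since pushed new nodes in front of ours.
        hazard_node* expected = node;
        if (!g_hazard_list.compare_exchange_strong(expected, node->next)) {
            // Walk from the current head to find our predecessor. Only the
            // lock holder ever rewrites next pointers, so this walk is safe.
            hazard_node* pred = g_hazard_list.load();
            while (pred->next != node) pred = pred->next;
            pred->next = node->next;
        }
        delete node;
        t_my_node = nullptr;
    }
};

// One guard per thread; acquire_node() would need to reference it on its slow
// path (e.g. (void)&t_reaper;) so the guard is actually instantiated.
inline thread_local hazard_node_reaper t_reaper;
```

Because the guard is thread_local, the destructor fires automatically when the thread exits, so no explicit unregister call is needed; the cost is that exiting threads and reclamation scans now serialize on the mutex.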