Potential Use Cases #45
I'm working on it.
Interesting. It means then that we have to support the hybrid MPI + OpenMP + Pthreads model. I'm not sure that the current OpenMP-based implementation in QV supports this.
Fine-grained MPI + OpenMP + Pthreads support would be fantastic. I'm aware of other use cases that would benefit from this capability, too.
One thing I'm thinking about is the ability to share a context and/or a scope between OpenMP threads and Pthreads.
I think we need functions along the lines of
Could you please elaborate on this?
Also, something to consider: would splitting up OpenMP and Pthread support help in any way?
Yes, I can: as we already discussed, MPI and OpenMP feature some kind of runtime system that can be relied upon and queried. This is not the case with Pthreads, so I will need to introduce some shared memory space that can contain the information the Pthread implementation will need. In the case of multi-paradigm programs (e.g., MPI + OpenMP + Pthreads), this shared space should be accessible by all "paradigms". My thinking is that having three separate supports/implementations that are not aware of each other is not going to work.
Thus, having a generic
In that case, could one implement an internal abstraction that provides such mechanisms for Pthreads?
Yes, that is what I'm planning to do. But this internal abstraction would have to be shared eventually, wouldn't it?
Could we accomplish the same goal by implementing the missing machinery internally to QV?
Maybe. But how do you detect hybrid cases? And enable the support in these cases?
Shared across tasks, yes; but I'm not convinced that we have to expose those details to the user.
And MPI + OpenMP is different from OpenMP + Pthreads IMHO.
I agree but I'm not sure that we can do something completely transparent. But I advocate for transparency in this matter so I think we're in agreement here.
Let me come up with an initial crappy design for Pthreads that works and then we'll iterate from it.
Would some machinery we come up with regarding #35 do the trick? Recall that the RMI should (but currently doesn't) keep track of all the groups and their respective tasks for us. Maybe we can use the RMI as the ultimate keeper of such information. This would obviate the need for an explicit init and finalize.
My gut feeling is that it will (partially at least).
OK, we have to discuss this a bit then, because it's something that I didn't completely catch previously. Which groups are you referring to? Because we know the word can be confusing. The same groups that are included in group tabs for each structure?
Yes, let's schedule a call so we can talk this over. This is an important decision. I have some ideas about the single-process case: it should be pretty straightforward to implement (famous last words).
Agreed.
Are you trying to impersonate me?
Here is another potential use case that's worth considering: internal use in mpibind. This might help demonstrate QV's generality in another piece of system software.
Greetings, @samuelkgutierrez. Not sure I follow. Could you elaborate a bit more?
I was just thinking that maybe we can implement core pieces of mpibind's API using QV underneath the covers. This could serve as another demonstration of QV's generality in the system software space if we can successfully use it for common mpibind tasks.
It makes sense, @samuelkgutierrez, thanks for clarifying!
Courtesy of @adammoody.
As a concrete use case, we might have a situation like:
Ideally, those background threads would run on different cores than the main application thread to avoid contention. However, they could run on the same core as the main app thread if there are no spare cores available. The background threads could run on the same core together, since they are likely not CPU intensive.
Does the Quo Vadis interface provide a way to specify a situation like that?
We can use hints within qv_scope_create to accommodate this. What we need to implement:
- QV_SCOPE_SYSTEM
- qv_pthread_create
- qv_scope_nobjs_avail
- hints that qv_scope_create can take

The INCLUSIVE (or shared) hint means that other workers may be running on the same resource (the opposite of exclusive). By default we should place threads using a BFS strategy and then fill up the cores if multiple hardware threads are available.
I guess both SCR and MPI would need to make QV calls?
Yes, the more components that use QV, the better placement and coordination.