Memory for PepQuery2 #30
I increased the memory for the erroring job to 16G and it finished.
Unfortunately the tool seems not to be satisfied, and condor complains that it tries to exceed the 16G:
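One likely explanation for the overshoot, assuming the wrapper sets the JVM heap to the allocated 16G: `-Xmx` caps only the heap, while the process' resident set also includes metaspace, thread stacks, and GC/native overhead, which matches the 17.5 GB peak in the metrics below. A minimal sizing sketch, where the 25% headroom factor is an assumed rule of thumb, not a measured value:

```python
# Sketch: size the scheduler request above the JVM heap cap.
# The 25% overhead factor is an assumed rule of thumb, not a measurement.
import math

def request_gb(xmx_gb: float, overhead: float = 0.25) -> int:
    """Memory to request for a JVM job: heap plus native/GC headroom."""
    return math.ceil(xmx_gb * (1 + overhead))

print(request_gb(16))  # -> 20, i.e. request ~20G for a 16G heap
```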
Looks like it supports a parameter to limit the number of CPUs it uses.
Yes, by default it uses all cores available to it, but it would be cleaner to use that parameter, I guess.
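For illustration, assuming the parameter in question is PepQuery's `-cpu` option (where the default 0 means "use all cores"), the invocation could pin the tool to the cores Galaxy actually grants via `GALAXY_SLOTS`. The jar name and heap size below are placeholders:

```python
# Sketch of an explicit PepQuery2 invocation that pins the core count.
# The jar path and -Xmx value are placeholders for whatever the wrapper uses.
import os
import subprocess

slots = os.environ.get("GALAXY_SLOTS", "1")  # cores granted by the scheduler
subprocess.run(
    [
        "java", "-Xmx16g", "-jar", "pepquery.jar",
        "-cpu", slots,  # pass the real allocation instead of the all-cores default
        # ...remaining tool parameters elided...
    ],
    check=True,
)
```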
I got this from Galaxy, probably, because I ran the job manually and it was still watched by the Galaxy handlers:

Job Metrics
cgroup

| Metric | Value |
| -- | -- |
| CPU Time | 2 hours and 54 minutes |
| Failed to allocate memory count | 0E-7 |
| Memory limit on cgroup (MEM) | 48.0 GB |
| Max memory usage (MEM) | 17.5 GB |
| Memory limit on cgroup (MEM+SWP) | 8.0 EB |
| Max memory usage (MEM+SWP) | 17.5 GB |
| OOM Control enabled | No |
| Was OOM Killer active? | No |
| Memory softlimit on cgroup | 0 bytes |
...
Destination Parameters

| Parameter | Value |
| -- | -- |
| Runner | condor |
| Runner Job ID | 44667975 |
| Handler | handler_sn06_3 |
| +Group | "" |
| accounting_group_user | 55103 |
| description | pepquery2 |
| docker_memory | 16G |
| metadata_strategy | extended |
| request_cpus | 1 |
| request_memory | 16G |
The job was stopped by condor for exceeding its memory:

Here is a PR to increase it.
Not 100% sure where to place this issue, but I thought it might be interesting for all users of the shared database. Otherwise I can of course move it to EU.
I am currently debugging an error in the pepquery2 tool. The job errored because the JVM ran out of memory.
When I tried to run the job locally I had to stop at 14G because my laptop (16G) started to lag.
I noticed that:
I would like to change that, but I am not sure which values I should consider. In their documentation I found a recommendation for

> 8 GB of memory and 4 CPUs

which is too little for at least the job I am looking at. When I tried to use `gxadmin query tool-memory-per-inputs` I found:

While `gxadmin report job-info` returned the following:

I am now trying to figure out how to implement a rule here, and whether we have to change something in the wrapper because of the CPU usage. Since I have never used the tool myself, I would be happy about any hints from people who have some experience with it.
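Not a concrete proposal yet, but a minimal sketch of what such a dynamic destination rule could look like; the function name, the 16 GB baseline, the 2 GB-per-GB-of-input slope, and the 64 GB cap are all assumptions just to illustrate the shape:

```python
# Hypothetical Galaxy dynamic destination rule -- a sketch, not a tested config.
# Baseline, per-input slope, and cap are illustrative assumptions.
from galaxy.jobs import JobDestination

def pepquery2_resources(app, job):
    # Total size of all input datasets, in GB
    input_gb = sum(
        (ida.dataset.get_size() or 0) for ida in job.input_datasets
    ) / 1024 ** 3
    # Assumed heuristic: 16 GB base + 2 GB per GB of input, capped at 64 GB
    mem_gb = min(64, 16 + int(2 * input_gb))
    return JobDestination(
        runner="condor",
        params={
            "request_memory": f"{mem_gb}G",
            "docker_memory": f"{mem_gb}G",
            # upstream docs recommend 4 CPUs; the wrapper would still need to
            # forward this count to the tool (e.g. via GALAXY_SLOTS)
            "request_cpus": "4",
        },
    )
```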