I would be fine with this. Wdyt @Zoey2936?
We can do this, but it will use a lot of RAM.
@szaimen I don't understand why you would rewrite this with the same value for the initial and max heap size; I've always seen these options with Xms < Xmx.
I'm almost sure this is the source of the problem: when Java cannot allocate more than the initial (reserved) size, it seems Elasticsearch cannot spawn new nodes for indexing.
I could try
-Xms256M -Xmx512M
but re-indexing takes very long... I have a lot of files and groupfolders, but I don't think I need 1GB or 2GB of RAM for this.
I don't know exactly what the memory usage printed by occ represents, but I've never seen it exceed 200 or 300MB.
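For reference, one way to try a smaller initial heap with a higher ceiling on a stock Elasticsearch image is a drop-in JVM options file (a sketch only; `jvm.options.d` is the standard location in official Elasticsearch images, but double-check the path for the image used here):

```
# config/jvm.options.d/heap.options — extra JVM options picked up by Elasticsearch
# Initial heap: what the JVM reserves at startup
-Xms256m
# Max heap: hard ceiling for the Java heap
-Xmx512m
```

Elasticsearch itself recommends keeping Xms equal to Xmx to avoid resize pauses, so a split like this trades some runtime predictability for a smaller idle footprint.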
To avoid high memory usage even when not that much is needed, why not use something like
-Xms64m -Xmx1024M
instead?
I have another idea: maybe I should close this in favor of a feature request for an ES_JAVA_OPTS env that can be set in the compose file?
In my case the problem may also be related to groupfolders, which fulltextsearch does not seem to understand really well; I wouldn't be surprised if files are indexed as many times as there are users. It's not clear from the logs whether there's just an entry per user (file path) or whether the file is actually parsed every time.
But I don't think this high memory usage is normal given the number of files I have to index here.
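The env-variable idea could look something like this in a compose override (a sketch only; the service name here is an assumption and would have to match the actual Elasticsearch container in this setup):

```yaml
# docker-compose.override.yml — "fulltextsearch-elasticsearch" is a hypothetical
# service name; adjust it to the real one in the compose file.
services:
  fulltextsearch-elasticsearch:
    environment:
      # Official Elasticsearch images pass this straight to the JVM
      ES_JAVA_OPTS: "-Xms256m -Xmx1024m"
```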
Hm... 512m and 512m are the values recommended in https://github.com/R0Wi/elasticsearch-nextcloud-docker#how-to-use-this, so I'm not sure increasing the value is the way to go. However, I would really like not to introduce a variable here. So maybe the fulltextsearch app needs to be improved instead in order to handle groupfolders better?
As you want, but if I were you I wouldn't take anything for granted unless it comes from the official Elasticsearch docs.
But if this does not happen for users without groupfolders, I agree this isn't the place to make that change.
Yet I don't understand why you want to avoid a variable; since this setting has to be set according to the number of files to scan, you might see this exception pop up in future issues!
At least I believe the docs should mention which file has to be modified (containers.json) in order to give it more RAM. This is not a problem if the container then re-runs with Xmx512M, because it only happens during a first run - or a manual occ fulltextsearch:index.
The container is now using ~1.2GB, and the app is working as expected.
This Java memory thing is a mess (I don't understand how it could go up to 3GB with an -Xmx of 1024M).
Well, I just had a quick look at the ES docs; they show the following example for memory settings in Docker. So yeah, I guess my first argument is broken ^^ they also use the same Xmx and Xms here.
docker run -e ES_JAVA_OPTS="-Xms1g -Xmx1g" -e ENROLLMENT_TOKEN="<token>" --name es01 -p 9200:9200 --net elastic -it docker.elastic.co/elasticsearch/elasticsearch:8.9.2
But also:
Running Java without Xms/Xmx will make Java eat basically all the RAM.
Yes, I'm confused by this statement: if we run in Docker, isn't the ES Java process alone in its env? So I don't see how ES_JAVA_OPTS could override other JVM options... how many JVM processes are there in this container?
Edit: I read it wrong. So this variable overrides every other option for this JVM... that seems like a bad idea, unless this variable is empty in the first place?
Edit 2: I checked, it's exactly the same as what is in the current main.