FIX: use 1GB for FTS max JVM memory usage #3355

Closed · wants to merge 1 commit
php/containers.json (1 addition & 1 deletion)
@@ -612,7 +612,7 @@
"internal_port": "9200",
"environment": [
"TZ=%TIMEZONE%",
"ES_JAVA_OPTS=-Xms512M -Xmx512M",
"ES_JAVA_OPTS=-Xms512M -Xmx1024M",
Collaborator
I would be fine with this. Wdyt @Zoey2936?

Suggested change:
- "ES_JAVA_OPTS=-Xms512M -Xmx1024M",
+ "ES_JAVA_OPTS=-Xms1024M -Xmx1024M",

Collaborator
we can do this, but it will use a lot of ram

@vidlb (Author) Sep 17, 2023
@szaimen I don't understand why you would rewrite this with the same value for the initial and the max heap size; I've always seen these options set with Xms < Xmx.
I'm almost sure this is the source of the problem: when Java cannot allocate more than the initial (reserved) size, Elasticsearch apparently cannot spawn new nodes for indexing.

I could try -Xms256M -Xmx512M, but re-indexing takes very long...
I have a lot of files and groupfolders, but I don't think I need 1GB or 2GB of RAM for this.
I don't know exactly what the memory usage printed by occ represents, but I've never seen it exceed 200 or 300MB.
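
For reference, one way to check the heap figures Elasticsearch itself reports is the _cat/nodes API; a minimal sketch, assuming the node answers on port 9200 without authentication (e.g. from inside the fulltextsearch container):

# Show current, percentage and maximum JVM heap per node
curl -s 'http://localhost:9200/_cat/nodes?v&h=name,heap.current,heap.percent,heap.max'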

@vidlb (Author) Sep 17, 2023
> we can do this, but it will use a lot of ram

To avoid high memory usage even when that much is not needed, why not use something like -Xms64m -Xmx1024M instead?
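
One way to sanity-check those flags outside AIO would be against the stock Elasticsearch image; a rough sketch only, since AIO builds its own fulltextsearch image, and the image tag and settings below are just assumptions for a throw-away local test:

# Start a disposable single-node ES with a small initial heap and a 1 GB cap
docker run --rm -p 9200:9200 --name es-heap-test \
  -e "discovery.type=single-node" \
  -e "xpack.security.enabled=false" \
  -e ES_JAVA_OPTS="-Xms64m -Xmx1024m" \
  docker.elastic.co/elasticsearch/elasticsearch:8.9.2

# In another shell, watch how much RAM the container actually keeps resident
docker stats --no-stream es-heap-test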

@vidlb (Author) Sep 17, 2023
I have another idea: maybe I should close this in favor of a separate feature request:
an ES_JAVA_OPTS env that can be set in the compose file?
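
Purely as an illustration of what that could look like (hypothetical: AIO does not expose such a variable today, and the variable name below is made up):

# Hypothetical override that AIO could forward into containers.json
FULLTEXTSEARCH_JAVA_OPTS="-Xms64m -Xmx1024m" docker compose up -d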

In my case the problem may also be related to groupfolders, which fulltextsearch does not seem to understand really well; I wouldn't be surprised if files are indexed as many times as there are users. It's not clear from the logs whether there is just one entry per user (file path) or whether the file is actually parsed every time.
Either way, I don't think this high memory usage is normal given the number of files I have to index here.

@szaimen (Collaborator) Sep 18, 2023
Hm... 512m and 512m are the values recommended in https://github.com/R0Wi/elasticsearch-nextcloud-docker#how-to-use-this, so I'm not sure increasing the value is the way to go. However, I would really like not to introduce a variable here. So maybe the fulltextsearch app needs to be improved instead, so that it handles groupfolders better?

@vidlb (Author) Sep 18, 2023
As you want, but if I were you I wouldn't take anything for granted unless it comes from the official Elasticsearch docs.

But if this does not happen for users without groupfolders, I agree this isn't the place to make that change.

Still, I don't understand why you want to avoid a variable: since this setting has to be tuned to the number of files to scan, you might see this exception pop up in future issues!

At the very least, I believe the docs should mention which file has to be modified (containers.json) in order to give it more RAM. It is not a problem if the container later re-runs with Xmx512M, because the issue only shows up during the first run, or during a manual occ fulltextsearch:index.
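
For the docs, that manual step could look roughly like this (a sketch; the container and user names are the usual AIO ones and may need adjusting):

# After editing php/containers.json and recreating the fulltextsearch container,
# trigger a (re)index from the Nextcloud container
docker exec --user www-data -it nextcloud-aio-nextcloud php occ fulltextsearch:index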

The container is now using ~1.2GB, and the app is working as expected.
This Java memory thing is a mess (I don't understand how it could go up to 3GB with an -Xmx of 1024M).

@vidlb (Author) Sep 18, 2023
Well, I just took a quick look at the ES docs; they show the following example for memory settings in Docker. So yeah, I guess my first argument is broken ^^ they also use the same Xms and Xmx here.

docker run -e ES_JAVA_OPTS="-Xms1g -Xmx1g" -e ENROLLMENT_TOKEN="<token>" --name es01 -p 9200:9200 --net elastic -it docker.elastic.co/elasticsearch/elasticsearch:8.9.2

But they also say:

The ES_JAVA_OPTS variable overrides all other JVM options. We do not recommend using ES_JAVA_OPTS in production.
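
For completeness, what the docs recommend instead of ES_JAVA_OPTS is a drop-in JVM options file; a minimal sketch, assuming the default config path of the official image (the AIO image may differ):

# Set the heap via config/jvm.options.d/ rather than ES_JAVA_OPTS
cat > /usr/share/elasticsearch/config/jvm.options.d/heap.options <<'EOF'
-Xms512m
-Xmx1024m
EOF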

Collaborator

Running Java without Xms/Xmx will basically make Java eat all the RAM.

@vidlb (Author) Sep 18, 2023
Yes, I'm confused by this statement. If we run in Docker, isn't the point that the ES Java process is alone in its environment? So I don't see how ES_JAVA_OPTS could override other JVM options... how many JVM processes are there in this container?

Edit: I read it wrong. So this variable erases every other option for this JVM... that seems like a bad idea, unless the variable is empty in the first place?
Edit 2: I checked, it's exactly the same as what is in the current main.

"bootstrap.memory_lock=true",
"cluster.name=nextcloud-aio",
"discovery.type=single-node",