When running the bulk indexer on the full LR dataset, full garbage collections occur with increasing frequency as the process progresses. They are now frequent enough that I am concerned the indexer will fail soon as the dataset grows.
In production the indexer runs with 24G of heap space; there is little room, if any, to give it more.
We need to determine whether there is a memory leak or whether a more scalable approach to bulk indexing is needed.
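One direction for the "more scalable approach" would be to stream records through fixed-size batches instead of accumulating indexing state for the whole run, so heap usage stays bounded regardless of dataset size. A minimal sketch (the `records` iterator, `index_batch` callback, and batch size are all hypothetical placeholders, not names from this codebase):

```python
def stream_bulk_index(records, index_batch, batch_size=1000):
    """Index records in fixed-size batches so memory stays bounded.

    `records` is any iterable (ideally a lazy generator, so the full
    dataset is never resident in memory); `index_batch` is whatever
    call submits one batch to the index. Both are assumed names.
    """
    batch = []
    for record in records:
        batch.append(record)
        if len(batch) >= batch_size:
            index_batch(batch)
            batch = []  # drop references so the old batch can be collected
    if batch:  # flush the final partial batch
        index_batch(batch)
```

If heap usage still grows under a scheme like this, that would point toward a genuine leak (e.g. references retained by the indexing library or by per-record caches) rather than batching strategy.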