Is there any reason not to allow the limit to be None, other than that case simply not having come up? The only real risk is that RAM could blow up, but on machines of the size used here that seems pretty unlikely, especially since the document size is known in advance. A rough sketch of what that could look like is below.
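For illustration, here is a minimal sketch of a `None`-able cap; the names (`DEFAULT_CHAR_LIMIT`, `segment_text`) are hypothetical and not this project's actual API:

```python
# Hypothetical sketch only -- DEFAULT_CHAR_LIMIT and segment_text are
# illustrative names, not the project's real identifiers.
from typing import Optional

DEFAULT_CHAR_LIMIT: Optional[int] = 10_000  # current-style hard cap


def segment_text(text: str, char_limit: Optional[int] = DEFAULT_CHAR_LIMIT) -> list[str]:
    """Split text into classifiable chunks; char_limit=None disables the cap.

    With no cap, peak memory scales with the largest document, which may be
    acceptable when document sizes are known and the machines are large.
    """
    if char_limit is None:
        return [text]  # no cap: pass the whole document through as one chunk
    return [text[i:i + char_limit] for i in range(0, len(text), char_limit)]
```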
Thanks for the report! There's no reason we can't adjust the limit up a bit, but I suspect this is a result of an out-of-distribution document not being handled very well by the segmentation algorithm, meaning it's trying to classify the full page rather than individual sections of it. We'll increase that limit next go-around, but I suspect that the results won't be meaningful unless updates to the segmentation process improve the way we break up the sections on the page.
Probably not an issue in a current journal-based workflow. However, with older material this does tend to happen.