Update render queue limits #152

Status: Open · wants to merge 4 commits into base master
10 changes: 5 additions & 5 deletions includes/render_config.h
@@ -70,17 +70,17 @@
//Legacy - not needed on new installs
//#undef METATILEFALLBACK

// Metatiles are much larger in size so we don't need big queues to handle large areas
// Typical osm.org render server can render 7 metatiles per second, and has average load of 4.5 metatiles per second.
// Queue depth of (7 - 4.5) * 86400 = 216000 moves the load from the busy part of the day to the night time
#define DIRTY_LIMIT (216000)
#define HASHIDX_SIZE (477856)

#ifdef METATILE
#define QUEUE_MAX (64)
#define REQ_LIMIT (32)
#define DIRTY_LIMIT (1000)

#else
#define QUEUE_MAX (1024)
#define REQ_LIMIT (512)
#define DIRTY_LIMIT (10000)
#define HASHIDX_SIZE 22123
Contributor:
What is HASHIDX_SIZE and why are you removing it?

Author:

This line is a no-op.
It is used here:

return key % queue->hashidxSize;

and redefined here:
#define HASHIDX_SIZE 2213

(see includes order in
#include "render_config.h"
)

Member @tomhughes (Nov 15, 2017):

So does the hash table have 2213 buckets, or 22123? Not that either is anywhere near big enough once you have a two-million-entry queue: that gives an average chain length of either 1000 or 100, both of which seem far too large.

Comment:

It seems somebody wanted to increase HASHIDX_SIZE from the default 2213 to 22123 when METATILE is not defined, but the mechanism is so ugly and include-order dependent that it should be fixed.

@vng (Nov 15, 2017):

@apmon According to the code history, here is the initial commit to use different values for HASHIDX_SIZE:
da19811

But it was broken in a later refactoring, and now HASHIDX_SIZE is always (non-obviously) 2213:
707e125

Author:

@tomhughes I've updated the numbers to match the ratio.
We're not using non-metatile mode, so it's not 2M but 200k records.

@vng I've updated request_queue.h to not rewrite the value.

#endif

// Penalty for client making an invalid request (in seconds)
4 changes: 0 additions & 4 deletions includes/request_queue.h
@@ -28,8 +28,6 @@
extern "C" {
#endif

#define HASHIDX_SIZE 2213

typedef struct {
long noDirtyRender;
long noReqRender;
@@ -79,5 +77,3 @@ void request_queue_copy_stats(struct request_queue * queue, stats_struct * stats
}
#endif
#endif