On Tue, Jan 07, 2014 at 08:18:38PM +0000, Amir Taaki wrote:
> Are you running an HDD or SSD? If it's an HDD and the write throughput
> is so low that blocks are not being indexed/written fast enough, then the
> memory buffer is filling up as new blocks are queued in the database.
HDD.
The issue might be triggered by that, but could this also be a memory
leak? Not necessarily in this portion of the code.
I've been running obworker through gdb and valgrind today, but no
meaningful results so far. I'll look into getting runtime stats on
memory allocation/freeing soon.
In the meantime, the issue can wait a bit. If no one else is able to
reproduce it, there's no reason to file a bug report yet.
> The only way to resolve that (if it's the problem) is by a) making pure
> writes faster or b) indexing less data (not a fan of this) [note: not
> pruning but just not indexing].
>
> ------------
>
> Can you update from the Git repo, and enable this new switch in your
> config file:
>
> https://github.com/spesmilo/obelisk/blob/master/src/worker/worker.cfg#L32
>
> log_requests = true
>
> Clear your log files, then run the daemon.
>
> $ grep request debug.log
>
> will give you some idea of the requests being run on the daemon.
OK, checked. The only requests are mine, 3 in total for this whole
debugging session. So it's not related.
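For future reference, the grep above can be extended into a per-request-type
breakdown. The log lines below are invented for illustration (I don't know the
exact debug.log format, so the field positions are an assumption -- adjust the
awk column to match the real output), and I write to sample.log so as not to
touch the actual debug.log:

```shell
# Hypothetical log lines -- the real obelisk debug.log format may differ.
cat > sample.log <<'EOF'
2014-01-07 20:10:01 DEBUG request blockchain.fetch_history
2014-01-07 20:11:42 DEBUG request blockchain.fetch_history
2014-01-07 20:12:05 DEBUG request blockchain.fetch_transaction
EOF

# Count requests grouped by method name (assumed to be the last field),
# most frequent first.
grep request sample.log | awk '{print $NF}' | sort | uniq -c | sort -rn
```

With only 3 requests in the whole session this is overkill, but it would make
heavy callers obvious if the daemon were actually being hammered.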