Author: Amir Taaki
Date:  
CC: libbitcoin
Subject: Re: [Libbitcoin] obworker uses up all memory and crashes [WAS: Re: New releases for libbitcoin/Obelisk/sx]
So if the buffer queue is too big, then don't queue writes?

Would this be a good scheme:

* Have a value in the service poller called:

    size_t blocks_queued_count_ = 0;


* Change the block storage code so that, when a new block is received, it does:

    if (blocks_queued_count_ > queue_capacity_)
    {
        log_debug(LOG_POLLER) << "Throttling writer.";
        return;
    }
    chain_.store(block, ...);
    ++blocks_queued_count_;


* Change the callback in the store to decrement the counter:

    --blocks_queued_count_;
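
Putting the three pieces together, here is a rough standalone sketch of the
counting scheme (the fake_chain/store interface, the callback signature and
the queue_capacity_ value below are stand-ins I made up, not the real
libbitcoin API):

    #include <cstddef>
    #include <functional>
    #include <iostream>
    #include <system_error>

    // Stand-ins for the real block type and blockchain interface.
    struct block_type {};

    struct fake_chain
    {
        // Pretend async store: calls the completion handler immediately.
        void store(const block_type&,
            std::function<void (const std::error_code&)> handle_store)
        {
            handle_store(std::error_code());
        }
    };

    class poller
    {
    public:
        void receive_block(const block_type& block)
        {
            // Too many writes still outstanding: skip queueing this block
            // and let it be re-requested later.
            if (blocks_queued_count_ >= queue_capacity_)
            {
                std::cout << "Throttling writer." << std::endl;
                return;
            }
            // Count the write before handing it off, so the completion
            // handler can safely decrement even if it runs straight away.
            ++blocks_queued_count_;
            chain_.store(block,
                [this](const std::error_code& ec) { handle_store(ec); });
        }

    private:
        void handle_store(const std::error_code&)
        {
            // One queued write has finished; free a slot for the next block.
            --blocks_queued_count_;
        }

        static const size_t queue_capacity_ = 50;
        size_t blocks_queued_count_ = 0;
        fake_chain chain_;
    };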


How does that sound?

On 07/01/14 20:35, William Swanson wrote:
> On Tue, Jan 7, 2014 at 12:18 PM, Amir Taaki <genjix@???> wrote:
>> Are you running an HDD or SSD? If it's an HDD and the write throughput
>> is so low that blocks are not being indexed/written fast enough, then
>> the memory buffer fills up as new blocks are queued for the database.
>>
>> The only way to resolve that (if it's the problem) is by a) making pure
>> writes faster or b) indexing less data (not a fan of this) [note: not
>> pruning but just not indexing].
>
> There is a third option: throttle the networking component so the
> writer can keep up. You control the rate at which "getdata" messages
> are sent, so you can limit those to whatever the writer can handle. I
> think this is the real solution. Having faster writes is nice too, but
> it just makes the race condition less likely.
>
> -William
>
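
For comparison, throttling at the getdata level as William suggests could look
roughly like this; the block_fetcher class and the send_getdata() call below
are placeholders I made up, not existing libbitcoin/Obelisk API:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Placeholder for a block hash.
    struct block_hash {};

    class block_fetcher
    {
    public:
        // Called whenever we could send another "getdata" to a peer.
        void request_more(const std::vector<block_hash>& inventory)
        {
            // Only request as many blocks as the writer has room for.
            if (blocks_queued_count_ >= queue_capacity_)
                return;
            size_t room = queue_capacity_ - blocks_queued_count_;
            size_t count = std::min(room, inventory.size());
            // send_getdata(inventory, count);  // placeholder for the real send
            blocks_queued_count_ += count;
        }

        // Called from the store completion handler for each block written.
        void block_written()
        {
            --blocks_queued_count_;
        }

    private:
        static const size_t queue_capacity_ = 50;
        size_t blocks_queued_count_ = 0;
    };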