Author: Amir Taaki
Date:  
To: libbitcoin
Subject: Re: [Libbitcoin] obworker uses up all memory and crashes [WAS: Re: New releases for libbitcoin/Obelisk/sx]
Not really, and now that I think about it, it doesn't seem like a good
solution (more of a workaround).

The problem (if this is it) is that the block writer is IO-bound, and
blocks are being queued faster than writes can complete, so memory is
exhausted.

I'm not sure this is the actual problem, but it should certainly be
something along these lines if it only happens on HDDs.

Only 500 blocks at a time are downloaded and queued for storage.
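
For reference, here is a minimal consolidated sketch of the counter
scheme quoted further down in this thread. The poller/blockchain class
shapes, the receive_block() entry point, and the capacity value are
assumptions for illustration; only blocks_queued_count_ and
queue_capacity_ come from the quoted fragments:

    #include <cstddef>
    #include <functional>

    // Placeholder standing in for the real libbitcoin block type.
    struct block_type {};

    class blockchain
    {
    public:
        using store_handler = std::function<void ()>;

        // Asynchronous write; handle_store runs once the write completes.
        void store(const block_type& /* block */, store_handler handle_store)
        {
            // Stub: a real chain would complete the write on a disk thread.
            handle_store();
        }
    };

    class poller
    {
    public:
        // Called for each block arriving from the network.
        void receive_block(const block_type& block)
        {
            // Refuse to queue more writes while the writer is saturated;
            // dropped blocks are simply re-requested later.
            if (blocks_queued_count_ > queue_capacity_)
                return;                       // "Throttling writer."
            // Increment before store() so a fast callback can't decrement
            // a counter that was never raised (the quoted scheme increments
            // afterwards).
            ++blocks_queued_count_;
            chain_.store(block, [this] { --blocks_queued_count_; });
        }

    private:
        blockchain chain_;
        std::size_t blocks_queued_count_ = 0;
        std::size_t queue_capacity_ = 50;     // assumed, not from the thread
    };

A real implementation would also need the counter to be thread-safe
(e.g. std::atomic<std::size_t>), since the store callback fires on the
writer's thread.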

On 07/01/14 21:23, mlmikael wrote:
> Wait, so what happens in the bigger picture when the writer is
> throttled per your code below?
>
> Is it really graceful? Will all transactions be handled duly, and
> will libbitcoin automatically resume the blockchain download?
>
> Thanks
>
> On 2014-01-07 22:08, Amir Taaki wrote:
>
>> So if the buffer queue is too big, then don't queue writes?
>>
>> Would this be a good scheme:
>>
>> * Have a value in the service poller called:
>>
>>     size_t blocks_queued_count_ = 0;
>>
>> * Change the block storage code when a new block is received to:
>>
>>     if (blocks_queued_count_ > queue_capacity_)
>>     {
>>         log_debug(LOG_POLLER) << "Throttling writer.";
>>         return;
>>     }
>>     chain_.store(block, ...);
>>     ++blocks_queued_count_;
>>
>> * Change the callback in the store to decrement the counter:
>>
>>     --blocks_queued_count_;
>>
>> How does that sound?
>>
>> On 07/01/14 20:35, William Swanson wrote:
>>> On Tue, Jan 7, 2014 at 12:18 PM, Amir Taaki
>>> <genjix@riseup.net> wrote:
>>>> Are you running an HDD or SSD? If it's an HDD and the write
>>>> throughput is so low that blocks are not being indexed/written
>>>> fast enough, then the memory buffer fills up as new blocks are
>>>> queued in the database. The only way to resolve that (if it's the
>>>> problem) is by a) making pure writes faster or b) indexing less
>>>> data (not a fan of this) [note: not pruning but just not indexing].
>>> There is a third option: throttle the networking component so the
>>> writer can keep up. You control the rate at which "getdata" messages
>>> are sent, so you can limit those to whatever the writer can handle. I
>>> think this is the real solution. Having faster writes is nice too,
>>> but it just makes the race condition less likely. -William
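
William's network-side throttle could look something like the sketch
below: only issue the next batch of "getdata" requests once the writer
has drained the outstanding ones. Everything here (session,
send_get_blocks(), the batch and limit values) is an assumed name for
illustration, not libbitcoin's actual API:

    #include <cstddef>

    class session
    {
    public:
        // Called from the store callback when a block write completes.
        void block_stored()
        {
            --pending_writes_;
            request_more();
        }

        // Keeps at most max_pending_ writes in flight, so the download
        // rate automatically matches the disk's write rate.
        void request_more()
        {
            if (pending_writes_ + batch_size_ > max_pending_)
                return;
            pending_writes_ += batch_size_;
            send_get_blocks(batch_size_);
        }

    private:
        void send_get_blocks(std::size_t count)
        {
            // Stub: a real session would send a getdata message asking
            // the peer for `count` blocks.
            (void)count;
        }

        std::size_t pending_writes_ = 0;
        std::size_t batch_size_ = 100;   // assumed request batch size
        std::size_t max_pending_ = 500;  // matches the 500-block window above
    };

Unlike the counter scheme, nothing is dropped here: blocks that cannot
be written yet are never requested, so no bandwidth is wasted
re-downloading them.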