Author: Rainer Weikusat
Date:
To: dng
Subject: [DNG] Memory management strategies.
Rainer Weikusat <rainerweikusat@???> writes:
> Didier Kryn <kryn@???> writes:
>> Le 01/02/2016 22:38, Rainer Weikusat a écrit :
>>> Didier Kryn <kryn@???> writes:
>>>> Le 01/02/2016 17:52, Rainer Weikusat a écrit :
>>>>> there's a known upper bound for the maximum number of objects which will
>>>>> be needed
>>>>      Some applications need to asynchronously create and destroy large
>>>> numbers of objects while the total number of objects at any given time
>>>> remains bounded. Creating them in one function or in one thread while
>>>> deleting them in another can be a sensible way to organize the program
>>>> if allocation/deallocation is efficient.
>>> Aha ... and what is this now supposed to communicate? It looks like a
>>> couple of unrelated statements to me which also don't seem to have any
>>> relation to the idea to use an array as 'backing store' for memory
>>> allocation, as opposed to, say, something which allocates page (frames)
>>> via OS calls, eg, mmap, and divides these as required or even just a
>>> 'large' memory block acquired via malloc.

>
> [...]
>
>> a data acquisition program, where a "producer" task reads blocks of
>> data from digitizers and passes them to a "consumer" task which sends
>> them over the network.
>>
>>     The producer allocates the blocks of data inside an mmapped kernel
>> buffer and the consumer frees them after use.

>
> [...]
>
>> The kernel buffer is necessary because the digitizer's data is
>> transferred by DMA
>
> [...]
>
>> There's no point in allocating an unlimited memory size, because if
>> the producer runs faster than the consumer it's quickly going to
>> request all system memory.
>
> Well, surely, if there's a particularly boring/ trivial application
> where avoiding latency is not a primary concern[*] and flow-control is
> necessary for ensuring that the producer doesn't overrun the consumer,
> one possibility to implement that (likely dictated by the way the DMA
> engine works) is to use a ring buffer to store 'data packets' (of some
> definition).
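A minimal sketch of such a ring buffer, for concreteness (the names, the fixed power-of-two size, and the `int` stand-in for a 'data packet' are mine, not from the post; a real DMA ring would be laid out as the hardware dictates). A full buffer makes the producer wait, which is exactly the flow control being discussed:

```c
/* Single-producer/single-consumer ring buffer sketch (illustrative). */
#include <assert.h>

#define RB_SIZE 8 /* must be a power of two */

struct ring {
    unsigned head;     /* next slot to write; only grows */
    unsigned tail;     /* next slot to read; only grows */
    int slot[RB_SIZE]; /* 'data packets' of some definition */
};

/* returns 0 if the buffer is full: the producer must wait (flow control) */
static int rb_put(struct ring *rb, int pkt)
{
    if (rb->head - rb->tail == RB_SIZE)
        return 0;
    rb->slot[rb->head++ & (RB_SIZE - 1)] = pkt;
    return 1;
}

/* returns 0 if the buffer is empty: nothing for the consumer to do */
static int rb_get(struct ring *rb, int *pkt)
{
    if (rb->head == rb->tail)
        return 0;
    *pkt = rb->slot[rb->tail++ & (RB_SIZE - 1)];
    return 1;
}
```

Because the indices grow monotonically and only wrap modulo the unsigned range, `head - tail` is always the fill level, so no slot needs to be wasted to distinguish full from empty.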


Just to sketch an alternate design: One of the applications I'm
responsible for is a logging program supposed to accept 'log events'
from a number of different sources (packet filter, content-filter, IKE
daemon etc), format these as text and then forward them to an arbitrary
number of 'log event sinks' (udp/ tcp syslog, database etc). Log events
must not be reordered. The application uses two threads. Initially, one
listens for events and the other waits until it is allowed to enter the
listening loop. If the listening thread receives something, it formats
that and distributes it to the sinks. Meanwhile, the other thread
listens for events. If a new event is received before the working thread
is done handling the last one, an 'event buffer' is put onto a queue and
the working thread will keep processing events from the queue until it
is empty. The maximum size of the queue is limited because it's
physically impossible to keep accepting events faster than they can be
sent off.
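The queue between the listening and the working thread could look roughly like this (single-threaded here for brevity; in the daemon it would sit behind a mutex, and the struct and field names are illustrative). FIFO insertion at the tail is what guarantees events are never reordered, and the length cap is the bound argued for above:

```c
/* Bounded FIFO queue of 'event buffers' (sketch). */
#include <assert.h>
#include <stddef.h>

struct event {
    struct event *next;
    int seq; /* stand-in for the formatted log data */
};

struct evq {
    struct event *head, **tail;
    unsigned len, max; /* bounded, per the flow-control argument */
};

static void evq_init(struct evq *q, unsigned max)
{
    q->head = NULL;
    q->tail = &q->head;
    q->len = 0;
    q->max = max;
}

/* enqueue at the tail; returns 0 if the queue is full */
static int evq_put(struct evq *q, struct event *ev)
{
    if (q->len == q->max)
        return 0;
    ev->next = NULL;
    *q->tail = ev;
    q->tail = &ev->next;
    q->len++;
    return 1;
}

/* dequeue from the head; NULL when empty (worker re-enters listening) */
static struct event *evq_get(struct evq *q)
{
    struct event *ev = q->head;
    if (ev) {
        q->head = ev->next;
        if (!q->head)
            q->tail = &q->head;
        q->len--;
    }
    return ev;
}
```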

Event buffers (and a lot of other structures) are allocated from 'a page
heap' which is grown whenever some allocation request cannot be
satisfied anymore. Unused objects are kept on type-specific free lists
which are consulted prior to allocating new objects. This is based on
two assumptions,

    - the system has enough memory to satisfy the demands of the
          running applications (otherwise, it must obviously fail to
          work)


    - if the daemon needed X elements of type T at some point in
          time in the past, it will need all X elements again at some
          time in the future
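A condensed sketch of that scheme (my own illustrative names; a static arena stands in for the growing page heap, which the daemon would extend via mmap or similar). Freed objects go onto the type-specific free list rather than back to the heap, which is exactly what the second assumption justifies:

```c
/* Type-specific free list in front of a grow-only heap (sketch). */
#include <assert.h>
#include <stddef.h>

struct obj {
    struct obj *free_next; /* valid only while the object is unused */
    char payload[56];
};

static struct obj arena[64]; /* stands in for the 'page heap' */
static size_t arena_used;
static struct obj *free_list; /* type-specific free list */

static struct obj *obj_alloc(void)
{
    struct obj *o = free_list;
    if (o) { /* consult the free list first */
        free_list = o->free_next;
        return o;
    }
    if (arena_used < sizeof(arena) / sizeof(arena[0]))
        return &arena[arena_used++]; /* 'grow the heap' */
    return NULL; /* the daemon would grow the heap via an OS call here */
}

/* objects return to the free list, never to the heap: if X objects of
   this type were needed once, they'll all be needed again */
static void obj_free(struct obj *o)
{
    o->free_next = free_list;
    free_list = o;
}
```

After a warm-up phase the free lists absorb all churn, so the steady state does no heap growth at all and allocation is a single pointer pop.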