Author: marc
Date:  
To: Joachim Fahrner
CC: DNG
Subject: Re: [DNG] Caching leads to unresponsiveness
> Linux uses all available memory for caching of filesystems. When copying
> large files to slow network filesystems (nfs, smb, sshfs, davfs) it takes a
> long time until that allocated memory becomes free. When these network
> filesystems saturate memory, Linux becomes very unresponsive. It can take
> minutes to start applications.
>
> Is there a way to limit memory usage of network filesystems?


I can think of two, but both might require a bit of coding:

- open(2) the target file(s) with O_DIRECT or maybe O_SYNC.
I don't think cp(1) has that as an argument though.
And few remote filesystems support a sync flag...

- rate limit the file transfer - in other words issue
the write(2) calls at a pace which matches the bandwidth
available. Again, I don't believe standard cp(1) has
that feature. If it were me I would use the double
tar trick, but write a rate-limiting pipe program
to fit between them:

tar -c -C /home/source -f- . | pipelimit -r 2M | tar -xv -C /mnt/target -f-

where a simple pipelimit looks like:

     while ((result = read(STDIN_FILENO, buffer, BUFFER_SIZE)) > 0) {
         usleep(small_delay);   /* pace the copy to the link bandwidth */
         write(STDOUT_FILENO, buffer, result);
     }


Conventional Unix/Linux assumes that filesystems are "fast", and
that one doesn't have to worry about the link properties/bandwidth
to the disk - you are bumping your head against that assumption.
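For completeness, a sketch of the open(2) flag route from the first option.
O_SYNC is used here rather than O_DIRECT, since O_DIRECT would additionally
need block-aligned buffers (and, as noted, few remote filesystems honour
these flags anyway); copy_sync is just a name made up for this example:

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Copy src to dst with synchronous writes, so dirty pages
   never pile up in the page cache. */
int copy_sync(const char *src, const char *dst)
{
    char buf[65536];
    ssize_t n;
    int in = open(src, O_RDONLY);
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC | O_SYNC, 0644);

    if (in < 0 || out < 0) {
        perror("open");
        if (in >= 0) close(in);
        if (out >= 0) close(out);
        return -1;
    }
    while ((n = read(in, buf, sizeof buf)) > 0)
        if (write(out, buf, n) != n) {  /* treat short writes as failure */
            n = -1;
            break;
        }
    close(in);
    close(out);
    return n < 0 ? -1 : 0;
}
```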

regards

marc