On 29/12/2015 16:34, Simon Hobson wrote:
> Didier Kryn <kryn@???> wrote:
>
>> That's the logic one would naively expect, but I'm not sure of it. I'm afraid the data remains in the cache and is not written back to disk until some process needs room in the cache. You can do the experiment of writing data to a USB memory stick and then waiting long after the light has stopped blinking. Then you can either sync or umount the device and it will blink again for some time before the command returns.
> Umount will cause some disk I/O.
> But it is normal for dirty pages to be written at some point - and not just when the space is needed in cache. You can see this by watching the dirty pages value in /proc/meminfo. You can make some dirty pages, and after a while you'll see the value go down again.
Down to zero?
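
For anyone who wants to watch that, a minimal Python sketch (assuming the standard Linux /proc layout) that polls the Dirty line of /proc/meminfo once a second is enough; copy a large file while it runs and you should see the value climb and then drain back down as the kernel flushes the pages:

    import time

    def dirty_kb():
        # Read the current amount of dirty page-cache data, as reported
        # by the kernel in /proc/meminfo (value is in kB).
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("Dirty:"):
                    return int(line.split()[1])
        return 0

    while True:
        print("Dirty:", dirty_kb(), "kB")
        time.sleep(1)
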
>
> Not having this cleanup would be a recipe for two things :
> 1) crap performance that might not be that much better than with no cache. In effect, instead of having to wait for your new data to be written, you'd have to wait for something else to be written to make room for it.
Who "have to wait" ? Apps don't have to: they get the data from
cache and write to cache. Maybe the disk-write policy depends on the IO
scheduler as the read policy does, but this layer is completely isolated
from the applications.
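
To illustrate the point, here is a minimal Python sketch (the file path and size are arbitrary examples): the write() calls complete against the page cache and return almost immediately, and only the explicit fsync() makes the application wait for the device.

    import os, time

    path = "/tmp/writeback-demo.bin"          # arbitrary example path
    data = os.urandom(64 * 1024 * 1024)       # 64 MiB of test data

    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)

    t0 = time.time()
    written = 0
    while written < len(data):                # write() may be partial, so loop
        written += os.write(fd, data[written:])
    print("write returned after %.3f s" % (time.time() - t0))

    t0 = time.time()
    os.fsync(fd)                              # only here does the app wait for the disk
    print("fsync returned after %.3f s" % (time.time() - t0))

    os.close(fd)

On a slow device such as a USB stick, nearly all the time is spent in the fsync line; without it the program exits long before the data is actually on the medium.
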
> As it is, it's not too hard to see this effect with certain workloads.
> 2) Almost guaranteed filesystem destruction - or at least massive data loss/corruption on system crash when the dirty pages don't get written.
>
Data was lost and filesystems *were* corrupted at every such crash, until the advent of journalled filesystems. I started installing Reiserfs many years ago precisely to cope with that crash problem.
Didier