Hi,
Antony Stone <Antony.Stone@???> writes:
> On Saturday 11 October 2025 at 15:56:51, Peter via Dng wrote:
>
>> From: Antony Stone <Antony.Stone@???>
>> Date: Sat, 11 Oct 2025 10:45:23 +0200
>>
>> > With an EXTn file system, and especially over NFS, more files per
>> > directory is the *last* thing I need.
>>
>> Appears that in Ext4, files per directory won't be a problem.
>
> It might not be for the machine, but it is for me.
>
> I don't want to be searching through 10,000+ files when I could be searching
> through 50 directories and then 200 files in the chosen directory.
Actually, some object-based storage systems use a similar layout for
the same reason: better lookup performance.
Most hash the file name or contents and use the first one or two bytes
of the hash as directory names. If you're using git, have a look at
any given .git/objects/ directory to see it in action.
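For illustration, here's a minimal Python sketch of that kind of
fan-out. The function name and the choice of SHA-256 are mine, picked
for the example; git itself uses SHA-1 (or SHA-256 in newer repos)
with the same two-character split:

    import hashlib
    import os

    def object_path(root, data):
        # Hash the content; SHA-256 is just an example digest here.
        digest = hashlib.sha256(data).hexdigest()
        # The first two hex characters name the subdirectory, the
        # rest name the file, e.g. root/ab/cdef... as in
        # .git/objects/.
        subdir = os.path.join(root, digest[:2])
        os.makedirs(subdir, exist_ok=True)
        return os.path.join(subdir, digest[2:])

    # e.g. object_path("objects", b"hello world")

With a two-character hex prefix you get at most 256 subdirectories,
so 10,000 objects fan out to roughly 40 files per directory.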
>> > Also, if my . file containing the directory itself gets corrupted,
>> > I'm a lot worse off, the more files there were listed in it.
>>
>> Yes, reliable backup is essential.
>
> Agreed, however once again I would far prefer to have to restore one directory
> of 200 files (assuming the other 49 are still okay and didn't get corrupted)
> rather than the single directory containing all 10,000 files.
Hope this helps,
--
Olaf Meeuwissen