Dear Dev1ers,
I have an oldish external 1 TB backup drive that is throwing up this
mount error. The drive is a single ext4 partition, about 70% full:
Error when trying to mount:
Failed to open directory "cstwo".
Error when getting information for file '/media/xxxxxx/cstwo/600':
Input/output error.
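I have not dug into the kernel log yet, but I am guessing something
like this would show whether the kernel is logging I/O errors for the
drive (assuming it still comes up as sdc):

# dmesg | grep -i sdc
# dmesg | grep -i 'i/o error'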
================================
Here's the GSmartControl Self-test log. Note that the drive only has 40
hours on it!
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed: read failure       90%        40         412352591
# 2  Extended offline    Completed: read failure       90%        40         412352591
In the Attributes data, the Reallocated sector counts are highlighted
in pink in three places.
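If the full attribute table would help, I can paste it. I believe the
command-line equivalent of what GSmartControl shows is something like
this (assuming smartmontools is installed and the drive is still sdc):

# smartctl -A /dev/sdc
# smartctl -a /dev/sdc

The first should be just the attribute table, the second everything
including the self-test log above.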
================================
What are the chances of fsck repairing the bad sectors? I shamefully
admit I have not thought about fsck for years. IIRC, it used to run
automatically at boot (Squeeze?), which made boot times quite
frustrating. So I would occasionally run it manually at a more
convenient time, with commands like these (from old saved bash history):
# fsck.ext3 /dev/sdd1
# e2fsck.ext3 /dev/sdd1
That prior knowledge has now vanished (I can't even remember what
e2fsck is for), and the above commands may or may not even be valid in
2020. I actually read man fsck (at least most of it), and there is one
option that seems like it would be a good idea:
-N Don't execute, just show what would be done.
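Side question: are fsck.ext4 and e2fsck actually the same program these
days? I am guessing something like this would tell me, but I would
rather hear it from someone who knows:

# ls -li /sbin/e2fsck /sbin/fsck.ext4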
Soooo . . . will something like this give me any useful info? And if it
doesn't explode, can I just run the command itself?
# fsck.ext4 /dev/sdc -N
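And if the dry run comes back sane, I gather from the e2fsck man page
that the fuller version might look something like this, with -c to scan
for bad blocks with badblocks, -k to keep any existing bad-block list,
and -f to force a check even if the filesystem claims to be clean. I am
assuming the partition shows up as /dev/sdc1 and that the drive stays
unmounted while it runs, so please correct me if I have mangled it:

# fsck.ext4 -fck /dev/sdc1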
Or am I looking in the wrong direction? I am rather hardware-challenged.
:D
Thanks for slogging through this. Your advice is most welcome.
golinux
PS. There is no critical data on this disk because I always keep
multiple backups. But it would be nice to have that drive functioning in
some capacity.