On 22/11/17 14:18, Aldemir Akpinar wrote:
>
> That's routine. Few readers read everything that can be read. For
> example, look at postgres. Its binary file format reveals quite a
> bit more than you can get using psql, and by design: The writer
> and binary format are intended for storing things quickly and
> reliably, and the reader for reading what was stored. Anything
> that's in the file but wasn't stored by instruction of an SQL user
> is uninteresting to psql, and the file format writer has no
> particular reason to avoid storing other information.
>
> Could you elaborate on why you are comparing a relational database
> system, whose files must be binary, with a logging system, whose
> files don't need to be binary?
>
Need? Nothing "needs" to be in binary[*]. It's a design decision. Do
the advantages of a structured format (mostly speed) outweigh the
disadvantages (higher cost of access if the reader software is
unavailable)?
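
As a rough sketch of that trade-off (a made-up record layout, nothing
to do with postgres), here is the same log entry stored both ways, in
Python:

    import struct, time

    ts, level, msg = int(time.time()), 3, b"disk full"

    # Binary: compact and cheap to parse, but opaque without the
    # layout spec ("<IB9s" here, and you have to know that).
    binary_record = struct.pack("<IB9s", ts, level, msg)

    # Text: bulkier and slower to parse, but readable in any editor.
    text_record = ("%d %d %s\n" % (ts, level,
                                   msg.decode("ascii"))).encode("ascii")

    print(len(binary_record), len(text_record))   # 14 vs. 23 bytes here

Either way the bytes only mean something to software that knows the
convention; the text convention is just the one everybody's tools
already implement.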
[*] Or, to put it another way: *everything on a computer is in
binary*. "Text" files are binary. The question is how easy it is to
decode the file format. It seems obvious that a "text" file is easy to
decode, since everyone knows the format (but what character set is it
in?); yet don't forget that the "text" file is stored on a filesystem,
which is itself a complicated "binary" structure. When you're talking
about "forensics", i.e. looking at something that may be broken in
exciting ways, it's quite naïve to assume that you can just mount the
filesystem (which one?) and use cat, vi, grep or whatever.
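
The character-set question alone makes "just text" less simple than it
looks. A minimal sketch: the same five bytes read under three assumed
encodings:

    data = b"na\xefve"              # 0xEF is 'ï' in Latin-1

    print(data.decode("latin-1"))   # the intended word: naïve
    print(data.decode("koi8-r"))    # still five characters, different word
    try:
        data.decode("utf-8")        # in UTF-8, 0xEF opens a multi-byte sequence
    except UnicodeDecodeError:
        print("not even valid UTF-8")

Nothing in the file says which reading is right; that knowledge lives
in the reader, exactly as with a "binary" format.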