Author: Jorge Timón
Date:  
To: Eric Voskuil
CC: libbitcoin mailing list
Subject: Re: [Libbitcoin] Satoshi client: is a fork past 0.10 possible?
On Tue, Feb 3, 2015 at 4:32 AM, Eric Voskuil <eric@???> wrote:
>>> For an alternative consensus to arise in this likely future scenario,
>>> the "unknown" miners need to be able to form their own consensus. This
>>> requires credible alternative implementations and maintainers with
>>> experience to support them and help drive towards agreement.
>>>
>>> As we see presently, for an implementation to be credible it needs to be
>>> in-use. That can take a long time to develop, but ultimately it brings
>>> more people to the table to guide such choices, and may even help to
>>> forestall significant conflict down the road.
>>
>> Remember, no majority of miners can force users to accept hardfork
>> changes to the consensus rules.
>
> That statement doesn't make sense. If no majority can do it, then it
> can't be done. You seem to believe that a hardfork requires universal
> acceptance. This is not the case.


No majority of miners can force a hardfork; only a majority of USERS can.
Universal acceptance by users instead of just a majority?
I don't know, honestly. I think it should be something
uncontroversial that can be universally accepted (like getting rid of
the obscure BDB-specific consensus rules, which was the last hardfork).
My point is that users decide the rules they're checking, and miners
can't change that.
If 95% of the miners change the subsidy schedule to produce more than
21 million coins but users reject the change, those users will just keep
listening to the blocks produced by the honest 5%.
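
To make that concrete, here is a rough sketch of the kind of subsidy check
every validating node applies on its own. The constants follow Bitcoin's
published parameters, but the names (block_subsidy, check_coinbase_value,
halving_interval) and structure are made up for this example, not taken
from libbitcoin or bitcoind:

// Illustration only: made-up names, real schedule constants.
#include <cstdint>

using amount_t = int64_t;
constexpr amount_t coin = 100000000;      // satoshis per bitcoin
constexpr int halving_interval = 210000;  // blocks between subsidy halvings

// Maximum new coins a block at the given height is allowed to create.
amount_t block_subsidy(int height)
{
    const int halvings = height / halving_interval;
    if (halvings >= 64)
        return 0;
    return (amount_t(50) * coin) >> halvings;  // 50 BTC, halved each interval
}

// Every user's node runs a check like this against every block it accepts.
// A coinbase claiming more than subsidy + fees fails here no matter how
// much hash power produced the block.
bool check_coinbase_value(amount_t coinbase_value, amount_t total_fees, int height)
{
    return coinbase_value <= block_subsidy(height) + total_fees;
}

The 95% majority can mine whatever they want, but blocks that fail this
check simply don't exist as far as those users' software is concerned.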

>>>> Please try to come up with an example of something that could go wrong
>>>> with different bitcoin implementations sharing libconsensus code,
>>>
>>> It has already happened that a bug has been pushed out broadly,
>>> requiring a temporary shut-down of the entire network. Imagine a
>>> scenario where there are many disparate code bases. The impact of a
>>> failure caused by any one of them is minimized.


Why couldn't this happen with libsecp256k1?
I don't think you're answering the question. In what way is it
more risky to depend on libconsensus than to depend on libsecp256k1?

>>>> it is very hard for me to argue about "decentralization" and "strength"
>>>> in this abstract and vague way.
>>>> Please explain to me in which ways libconsensus implies risks that
>>>> libsecp256k1 doesn't, for the sake of your examples.
>>>
>>> The curve library implies the same technical risk, but is independent of
>>> the political risk.
>>
>> There's no political risk on libbitcoin using libconsensus.
>> libconsensus cannot be forced to upgrade so no new checks can be
>> pushed onto libbitcoin.
>> If, for example, libbitcoin wanted to implement a softfork change that
>> the official distribution of libconsensus doesn't implement,
>> libbitcoin can implement the additional checks outside of libconsensus
>> (or just fork libconsensus).
>> Seriously, I don't think there's any "political risk" in libconsensus.
>
> Your understanding of hardforks and lack of visibility into the source
> of the leverage that the Foundation has with miners seems to be why you
> are missing the larger issue.


I believe I have a solid understanding of hardforks and softforks, so
it must be the other thing, about the Foundation, which honestly doesn't
interest me much.
Still, if an undesired change is introduced to libconsensus you can
fork it, just like you would if a backdoor were introduced in libsecp256k1.

>>>>>> I would at least suggest to use libconsensus for testing
>>>>>> your script interpreter implementation and your signatures checks.
>>>>>
>>>>> Consider for a moment what this implies. In order to run any such
>>>>> testing we need test vectors to run through both implementations. There
>>>>> is no other way to perform such testing, and notably there is no need
>>>>> for shared code.
>>>>
>>>> Well, even running random bitstrings in both interpreters would be
>>>> useful, as any difference in results would be a symptom of an
>>>> extremely dangerous bug.
>>>
>>> Fuzzing is simply a means for generating test vectors.
>>
>> Yes, and it is still not enough for having 100% certainty that
>> libbitcoin consensus code is functionally identical to other nodes'
>> consensus code.
>> All I was saying here is that even libbitcoin depending on
>> libconsensus only for tests would be something good.
>>
>>> I'm not arguing for a libconsensus replacement, just pointing out that
>>> comparative testing is not an integration scenario.
>>
>> Sure, and I was just saying that since there are no test vectors that
>> can cover all cases, running random tests against libconsensus would
>> increase the testing surface.
>
> If some subset of tests change on each run, and the code changes as well
> (otherwise why retest?) then we have added a moving window of tests over
> a moving window of code. Granted this is an informal analysis, but you
> could just as easily fix your randomly-generated tests and changing code
> will likely cover the same increased surface.


Retesting without changing the code makes sense if some of your tests
are random.

> Furthermore, given that we're talking about randomly changing random
> tests over a practically infinite space, the coverage is practically zero percent of
> the surface area.


What? The more random tests you run, the more of that practically
infinite space you cover.
This seems obvious to me.

> Also, in order to be useful test results need to be repeatable, which
> means there is a new requirement to be able to recover the random test
> vectors for any run.


Of course you will be able to reproduce any random test that fails.
When testing libsecp256k1 by comparing its results with OpenSSL's on
random inputs, one of the random tests failed, just once.
It turned out that OpenSSL had a very specific bug that had been
there for years.
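
As a sketch of what I mean by random comparative tests: the two verify
wrappers below are hypothetical stand-ins for libbitcoin's interpreter and
libconsensus, not real APIs, and the point is that logging the seed makes
any random failure reproducible on demand.

#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

// Hypothetical stand-ins for the two implementations under comparison;
// a real harness would call libbitcoin's script machine and libconsensus.
bool libbitcoin_verify(const std::vector<uint8_t>&) { return true; }
bool libconsensus_verify(const std::vector<uint8_t>&) { return true; }

int main()
{
    // Record the seed so any failing run can be replayed exactly.
    const uint64_t seed = std::random_device{}();
    std::printf("seed: %llu\n", static_cast<unsigned long long>(seed));

    std::mt19937_64 rng(seed);
    std::uniform_int_distribution<int> byte(0, 255);
    std::uniform_int_distribution<std::size_t> length(0, 520);

    for (long i = 0; i < 1000000; ++i) {
        std::vector<uint8_t> script(length(rng));
        for (auto& b : script)
            b = static_cast<uint8_t>(byte(rng));

        // Any divergence between the two implementations is a consensus bug.
        if (libbitcoin_verify(script) != libconsensus_verify(script)) {
            std::printf("divergence at iteration %ld (seed %llu)\n",
                        i, static_cast<unsigned long long>(seed));
            return 1;
        }
    }
    return 0;
}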

>> I agree. Hopefully we will get there, but a previous step is isolating all
>> the consensus code within bitcoind (decoupling it from other
>> functionality).
>
> Yes, and this underscores my point. Once you pull it apart, move
> libconsensus to its own repo, and expose only the VerifyScript call,
> you will find that you have had to maintain a significant portion of the
> code that implements VerifyScript in both bitcoin and in libconsensus.


Ok, I finally see what you're saying. Separating libconsensus would
require code duplication since, for example, the script type will still
be required to produce scripts and to sign them.
That may actually be a good reason to never separate libconsensus from
Bitcoin Core, or to do so only as a sub-repository or something.

So I think your complaint is that libbitcoin will have some code
duplication with libconsensus.
libconsensus will only have a C interface with simple functions.
So, no, a common type lib is not what libconsensus is about.
I don't think there's any way around this. Since libbitcoin has much
more functionality than libconsensus, it will have to maintain much of
its current script code, because its users need to produce and sign
scripts, not just verify them.
Depending on libconsensus would only save libbitcoin from reimplementing
the consensus checks, in this case only VerifyScript (the script
interpreter).
Anything else will have to be maintained if libbitcoin wants to
preserve its current functionality.
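
For reference, the whole exported surface is roughly the declaration below.
It is paraphrased from memory of Bitcoin Core's bitcoinconsensus header
around 0.10, so details may differ; check the installed header rather than
trusting this. Everything crosses the boundary as serialized bytes, which
is exactly why callers keep their own script and transaction types:

// Paraphrased, not authoritative: consult the real bitcoinconsensus.h.
extern "C" {

typedef enum bitcoinconsensus_error_t
{
    bitcoinconsensus_ERR_OK = 0,
    bitcoinconsensus_ERR_TX_INDEX,
    bitcoinconsensus_ERR_TX_SIZE_MISMATCH,
    bitcoinconsensus_ERR_TX_DESERIALIZE,
} bitcoinconsensus_error;

// Returns 1 if input nIn of the serialized transaction txTo correctly
// spends scriptPubKey under the given verification flags, 0 otherwise.
int bitcoinconsensus_verify_script(const unsigned char* scriptPubKey, unsigned int scriptPubKeyLen,
                                   const unsigned char* txTo, unsigned int txToLen,
                                   unsigned int nIn, unsigned int flags,
                                   bitcoinconsensus_error* err);

unsigned int bitcoinconsensus_version();

} // extern "C"

Nothing in there helps a caller build or sign a script, so that code has
to live somewhere else, which in libbitcoin's case is where it already is.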

> And it makes little sense for other libraries to use it if you don't.


Mhhmm, I guess we agree to disagree here.

> Developing for end-users is different than developing for developers.


Agreed. I'm not sure if you're implying that the current libconsensus
design is for end-users... it is definitely intended to be used by developers.