
Hardening Lightning

Date: 30 January, 2018


Transcript by: Bryan Bishop

https://twitter.com/kanzure/status/959155562134102016

slides: https://docs.google.com/presentation/d/14NuX5LTDSmmfYbAn0NupuXnXpfoZE0nXsG7CjzPXdlA/edit

previous talk (2017): http://diyhpl.us/wiki/transcripts/blockchain-protocol-analysis-security-engineering/2017/lightning-network-security-analysis/

previous slides (2017): https://cyber.stanford.edu/sites/default/files/olaoluwaosuntokun.pdf

Introduction

I am going to go over some ways that I've been thinking about hardening the lightning network, in terms of making its security better and making the client itself more scalable, and then I'll talk about some issues and pain points that come up when you're dealing with fees on mainnet. And there are some relatively minor changes that I propose to Bitcoin in this talk. We'll have to see if they have a chance of getting in. They are not big sweeping changes. One is a new sighash type and one is a new opcode, and then there's covenants, but that's an entirely separate story of course.

Talk overview

A quick overview first. I am going to give an overview of lightning's security model. I am not going to go much into how lightning works because I assume that you at least vaguely know what the lightning network is. Payment channels, you connect them, you can route across them, and that's the gist of it. We're going to talk about hardening the contract breach event. Some people have talked about ways we can do that by adding new consensus changes, or by approaching it from the point of view of a strategy for when a breach actually happens and there's a large mempool backlog. Then, I am going to introduce a new lightning channel type and a new channel design, basically making the channels more succinct, meaning you'd have to store less history, and making things like outsourcing a lot more efficient as well. Then I am going to talk about outsourcing in a newer model that maintains client privacy as much as possible, because if you want outsourcing to scale to support a large number of clients then we want outsourcers to store as little state as possible. And then we're going to get into making lightning more succinct on chain. If it's an off-chain protocol, then we want to make sure it has the smallest on-chain footprint possible; otherwise it's not really scaling, because you're hitting the chain every single time. It should be off-chain itself.

Security model

There's a diagram of the layers of lightning over here. People always say it's like "layer 2". But to me there's a lot more layers on top of that. To me, layer 1 is bitcoin and the blockchain itself. Layer 2 is the link layer between channels. This is basically how do I open a channel between myself and Bob and how do Bob and I actually update the channels. Then there's end-to-end routing and the HTLCs and onion routing and whatever else is involved there. And then you have an application layer for things being built on top of lightning, such as exchanges.

I had emojis on this slide, but I guess it didn't translate somehow. That's okay.

The way that lightning works is that it uses bitcoin or another blockchain as a dispute-mediation system. Instead of having every transaction go onto bitcoin itself, we do contract creation on bitcoin. We can do enforcement there. But the execution is off-chain. We do the initialization on the blockchain, then we do the execution off to the side of the chain. This makes it succinct, and that's fine. We treat the chain as a trust anchor. Basically, any time we have a dispute, we can go to the blockchain with our evidence and transactions, publish these transactions more broadly, and show that, hey, my counterparty is defrauding me, he is violating some clause of the lightning smart contract. And then I get all my money back, basically.

We have the easy way and the hard way. Optimistically, we can do everything off-chain. If we want to exit the contract, then we do signatures between each other and it goes to the chain. The hard way is in the case where I have to go on-chain to actually do a property dispute because of a violation in the contract. When we're handling disputes, the way we think about it is that we can write to the chain "eventually". This is configured by a time parameter called T which is like a bip112 CSV value. It means that every time someone goes to the chain there's a contestation period that is open for time period T to basically refute their claim. We say "eventually" because T can be configured. You want to configure "T" based on the size of the channel. If it's a $10 channel then maybe you want a few hours, and if it's a million dollar channel then maybe you want T to be closer to a month or whatever.

The other thing we assume is that miners or pool operators are not colluding against us. They could censor all of our transactions on the chain. As part of this assumption, we assume a certain minimum level of decentralization of the blockchain miners because otherwise you can get blacklisted or something. There are ways that you could try to make the channels blend in with everything else and all of the other transactions occurring on the blockchain, and there's some cool work on that front too.

Strategy: Hardening contract breach defense

Moving on, let's talk about a strategy for hardening contract breach defense. Something that comes up a lot, and people ask this, is: how do I handle a contract breach in the case of a massive backlog? This would be the case where, for whatever reason, fees are sky high, someone is spamming the chain, and there's a contract breach. What am I going to do from there? There are some things that people have worked on in terms of adding new consensus changes to bitcoin, such as timelocks that stop ticking while blocks are full, or possibly some pre-allocated arena where you could do battle for your smart contract. This, instead, is looking at it from a more strategic standpoint, at the fee dynamics involved in handling the commitment transaction itself.

Whenever someone broadcasts a prior state, perhaps out of order, they are basically locked to that particular state. Say Bob had $2 on the newest state and $10 on an older state. He's going to go back to the $10 state. At that point, Bob can only access his money from the channel through outputs he has already revoked. What his counterparty can do is start to progressively siphon Bob's funds into miner fees. This creates a strategic situation where Bob has two options: he can stop, or he can keep bidding, and I can keep going until I've eventually siphoned all of his money into miner fees. The only way he can actually succeed in getting that prior-state transaction into the chain is by paying more in miner fees than he actually has in his balance, which he could do using child-pays-for-parent. I think this is a pretty good approach. What we're doing here is that the justice transaction is going to use replace-by-fee; we're going to use that to progressively bump the fee, turning the cheater's money into miner fees. I think this is a pretty good stopgap for the way things work right now. If someone tries to be malicious, you can almost always penalize this person. So even if there's a massive backlog, assuming the miner wants the higher fee rate, Bob basically gets nothing. This further disincentivizes cheating, because now there is this strategy where I can just give away your money to miner's fees and the adversary gets nothing. I'm still made whole through this entire ordeal, so whatever happens I'm okay; I want to penalize and punish Bob in the worst way possible, and I'm going to jump to the front of the queue ahead of everything else in the mempool. I could have put maybe 20 BTC towards fees -- I'm fine, I'm just punishing my counterparty, it's scorched earth, and then I wash my hands and walk away and everything's okay.
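To make that bidding dynamic concrete, here is a minimal sketch (hypothetical numbers and a hypothetical `justice_fee_bids` helper, not anything from the talk or from any implementation) of why the honest party always wins this game: every replace-by-fee bump comes out of the cheater's revoked balance, so the cheater is priced out long before the honest party loses anything.

```python
# Toy sketch of the "scorched earth" justice strategy. Each replace-by-fee
# bump must beat the cheater's current package feerate; because the honest
# party is reclaiming the cheater's whole balance, it can always outbid --
# the cheater can never profitably pay more in fees than the balance it is
# trying to steal.

def justice_fee_bids(cheater_balance_sat: int,
                     cheater_bid_sat: int,
                     min_increment_sat: int = 10_000):
    """Yield successive justice-transaction fees until the cheater is priced out."""
    while cheater_bid_sat <= cheater_balance_sat:
        # Outbid the cheater's child-pays-for-parent package by one increment.
        fee = cheater_bid_sat + min_increment_sat
        yield fee
        # Worst case: the cheater matches our fee. Beyond their stolen
        # balance, continuing is a guaranteed loss for them.
        cheater_bid_sat = fee
    # Any fee we pay comes out of the cheater's revoked output, so the
    # honest party is made whole regardless of where the bidding stops.

for bid in justice_fee_bids(cheater_balance_sat=100_000, cheater_bid_sat=10_000):
    print(f"broadcast justice tx paying {bid} sat in fees")
```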

Reducing client side history storage

https://www.youtube.com/watch?v=V3f4yYVCxpk&t=5m45s

Now we're going to talk about scaling lightning itself, from the client side. Because contract execution is local, we have a local transcript. Every time we do a new state update, there's basically a shadow chain. By shadow chain I mean that we have a "blockchain" in the sense that each state refers to the previous state in a particular way, and we can do state transitions between these states. The shadow chain is only manifested on chain in the case of a contract breach or if I just want to force-close. Typically, in the normal case, we do a cooperative close, where both sides sign off on the multisig, we go on chain, and you see nothing else. The shadow chain is only manifested on the blockchain in the case of a breakdown or some sort of breach remedy situation.

These state transitions in the channel can be very generic; later this year we might get fancier with some cool stuff. Right now it's just adding HTLCs, removing HTLCs, and keeping track of prior state. The goal here is to reduce the amount of state that the client needs to keep track of. It would be good if they could keep track of less state, because then they could handle more high-throughput transactions. It has implications for outsourcing too. If the state requirements for the client to act on the contract are reduced, then the outsourcers become more succinct as well; if the outsourcers are more succinct then people are going to run them, and if people run them then we're going to have more security, so it's a reasonable goal to pursue.

Overview of commitment invalidation

Before we get into reducing the amount of state, I am going to talk about some of the history of how to do commitment invalidation in lightning. You have a series of states. You walk forward through these states one by one. Each time you go to a new state, you revoke the old one. You move from state 1, you revoke it, state 2 is revoked, now you're on state 3. The central question in how we do channels is how we do invalidation. One thing to note is that this only matters for bi-directional channels where you're going both ways. In a uni-directional channel, every single state update I do basically benefits the other participant in some way, and they don't really have an incentive to go back to prior states. But in a bi-directional channel there's probably some point in the history where one of the two counterparties was better off, where they had some incentive to try to cheat and go back to that prior state. We solve this by invalidating every single state once we make a new state. The penalty is that if I ever catch you broadcasting a prior state then you basically get slashed: your money all goes away, and there's some strategy there which I have already talked about a bit. Naively, you could keep all the prior states. But that's not very good, because now you have linearly growing storage as you do these updates. People like lightning and other off-chain protocols because they can be super fast, but if you have to store new state for every single update then that's not very good. The other option is going to the blockchain to close out each prior state, but that's not very good either, because now you have control transactions going into the chain, which isn't good if you're trying to make everything succinct.

History of succinct commitment invalidation

So let's get into the history of commitment invalidation and how we currently do it.

When one of the first bi-directional channels was proposed, it was basically a bip68 mechanism with decrementing sequence locks: use relative timelocks such that later states can get in before prior states. bip68 is basically this relative timelock mechanism. You'd start with a 30-day timelock, do an update, go to 29 days, do an update, then it's 28. The way this enforced the latest state is through the timelocks: if you try to broadcast stale state 28, I can get state 30 in before yours, because your state's timelock won't have expired yet. The drawback is that this has a limited number of possible updates: if it's a 30-day locktime with a 1-day decrement, that's only 30 updates at most. You have to bake that into the lifetime of the channel.
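As a quick illustration of that update limit, here is a toy calculation (illustrative parameters, not from the talk):

```python
# Sketch of the decrementing-timelock scheme. Each new state is signed with
# a *smaller* relative timelock, so a newer state can always confirm before
# any older one -- but the timelock can only be decremented so many times
# before it hits zero.

INITIAL_TIMELOCK_DAYS = 30   # relative timelock on state 1
DECREMENT_DAYS = 1           # how much each update shaves off

max_updates = INITIAL_TIMELOCK_DAYS // DECREMENT_DAYS
print(f"channel supports at most {max_updates} updates")

# State i carries a timelock of (30 - i) days: state 30 (timelock 0) can
# confirm immediately, while stale state 28 must wait 2 more days, giving
# the honest party time to get the latest state in first.
for state in (28, 29, 30):
    print(f"state {state}: relative timelock {INITIAL_TIMELOCK_DAYS - state} days")
```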

Moving on, there was another proposal called the commitment invalidation tree, which is used in duplex payment channels (cdecker). You keep the decrementing timelock thing, but you add another layer on top, which is basically a tree layer. Because of the way the timelocks work, you can't broadcast a leaf until you've broadcast the root itself. Also, because they were bi-directional channels constructed from two uni-directional channels, they had to be reset periodically when the balances got out of whack. Every time you reset it, you had to decrement the timelock, and every time you did that you had to replace this root. That worked out pretty well because, with a tweak called the kick-off transaction, you get an indefinite-lifetime channel. Pretty cool, but the drawback is the additional on-chain footprint: if you have this massive tree that you need to broadcast to get to your current state, it's not very succinct, because you have to broadcast all of those transactions, which is not particularly good if our goal is succinctness.

What lightning does now is called commitment revocations (hash- or key-based): you must reveal the secret of the prior state when accepting a new state. The drawbacks are that you must store O(log n) state for the remote party, the key derivation is more complex, and the state is asymmetric. We use commitment revocations where every single state has a public key, and when you transition to the next state you basically must give up that private key. It's a little more complex than that. The cool part is that we figured out a way to generate these keys deterministically. The sender has a seed and this tree structure that works like a random number generator to generate all these secrets, and you as the receiver can collapse all of these down into just a few elements.

The goal here is to develop a commitment scheme with symmetric state. Commitment revocations in lightning right now have asymmetric state. This is basically due to the way we ascribe blame: we each know what our own transactions look like, so if you broadcast mine on chain then I already have what I need. But that gets worrisome with things like multiparty channels. If we could make the state symmetric, then multiparty channels wouldn't have this weird combinatorial state blowup, and we could also make all of the state much smaller.

Commitment invalidation in lightning v1.0

So here's a review of the way we do commitment invalidation in lightning right now. Well, I put v1.1 on the slide, but we're on v1.0, I don't know. I guess I was thinking ahead. The way it works is that every single state has this thing called a "commitment point", which is just an EC base point. The way we derive the private keys for each of these points is using a shachain. You can think of a shachain as having a key k and an index i which gives you a random element at index i, and as the receiver, because of this particular structure, I can collapse them: any time I have shachain element 10, I can forget everything else and re-derive the earlier data knowing only element 10 and the parameters, more or less.
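For the curious, here is a sketch of per-commitment secret generation in the style of BOLT 3's shachain (the all-zero seed is for illustration only, and the exact bit-indexing convention in real implementations is pinned down by BOLT 3's test vectors):

```python
import hashlib

def generate_from_seed(seed: bytes, index: int) -> bytes:
    """Derive the per-commitment secret at `index` from a 32-byte seed.

    Walking the 48 index bits from most- to least-significant, flipping the
    corresponding bit of the running hash and re-hashing, gives the shachain
    structure: secrets are revealed in an order that lets the receiver
    re-derive whole subtrees from a single element, so it stores only
    O(log n) secrets instead of all n.
    """
    p = bytearray(seed)
    for bit in range(47, -1, -1):
        if (index >> bit) & 1:
            p[bit // 8] ^= 1 << (bit % 8)   # flip bit `bit` of the running hash
            p = bytearray(hashlib.sha256(bytes(p)).digest())
    return bytes(p)

seed = bytes(32)  # all-zero seed, illustration only
print(generate_from_seed(seed, 0xFFFFFFFFFFFF).hex())
```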

We do this key derivation scheme which is kind of complex, but the important takeaway is that when we do a state update, you give me a public key, and then I do some elliptic curve math where it turns out that once I reveal the private key to this thing, then only you can actually sign for that state. We make one of these commitment points and one of the revocation points, and when we go to the next state you basically reveal the secret to me, and that's how it works.
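Here is a rough sketch of that derivation in the shape of BOLT 3's revocation key, using the third-party `ecdsa` package for the curve math (the toy secrets are illustrative). The point is the asymmetry: either side can compute the revocation public key, but the private key needs one party's basepoint secret plus the other party's per-commitment secret, which is only handed over once the state is revoked.

```python
# A sketch of two-tweak revocation-key derivation in the style of BOLT 3,
# assuming the third-party `ecdsa` package (pip install ecdsa).
import hashlib
from ecdsa import SECP256k1

G, n = SECP256k1.generator, SECP256k1.order

def compress(point) -> bytes:
    prefix = b"\x02" if point.y() % 2 == 0 else b"\x03"
    return prefix + point.x().to_bytes(32, "big")

def h(*parts: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big")

revocation_basepoint_secret = 0x1111  # toy secrets, illustration only
per_commitment_secret = 0x2222

R = G * revocation_basepoint_secret   # revocation basepoint
P = G * per_commitment_secret         # per-commitment point

# Public derivation: either side can compute the revocation pubkey...
revocation_pubkey = R * h(compress(R), compress(P)) + P * h(compress(P), compress(R))

# ...but the private key requires both secrets, revealed only on revocation.
revocation_privkey = (revocation_basepoint_secret * h(compress(R), compress(P))
                      + per_commitment_secret * h(compress(P), compress(R))) % n

assert compress(G * revocation_privkey) == compress(revocation_pubkey)
print("revocation pubkey:", compress(revocation_pubkey).hex())
```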

This one has a few drawbacks. It gets involved because we were trying to defend against rogue-key attacks and things like that. But I think we can make this simpler. The client has to store the current state plus this log(k) state, where k is the number of state updates ever made. The outsourcer needs a signature for every single state, plus a signature for every HTLC we have, and it basically needs to collapse the log(k) state itself. So we want to make this simpler and more succinct.

OP_CHECKSIGFROMSTACK

https://www.youtube.com/watch?v=V3f4yYVCxpk&t=12m15s

And now for a brief diversion.

I am proposing an addition to bitcoin called OP_CHECKSIGFROMSTACK. Bitcoin checks signatures; you have lots of checksigs when you're validating the blockchain. One thing about bitcoin is that what gets signed is always assumed to be this thing called the sighash. Any time there's a signature operation, implicitly we generate this sighash, which is derived from the transaction itself; sighash flags give you some control over what is actually being signed. That's cool, but it's a little bit restrictive. What if we could add the ability to validate signatures over arbitrary messages? This is super powerful because it lets you do things like delegation: if this message is signed by someone's public key, then they can take this output. We could also use this to make oracles, like an integer signed by Bitfinex that we could use inside of a smart contract. We could also have these "blessed" message structures, where your protocol has a message that might be opaque but has a particular structure and can only be signed by both participants, so you could sign it at some point and use it as evidence later on.

This proposal isn't fully worked out to be soft-fork safe for bitcoin yet. But basically you have a message, a signature, a public key, and maybe a version, so different versions in the future could use ECDSA or Schnorr signatures or whatever else. The opcode tells you whether the signature is valid.
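A minimal sketch of the semantics (using the third-party `ecdsa` package; the oracle message is a made-up example): the opcode would just be ordinary signature verification, but over a message supplied on the stack instead of over the implicit transaction sighash.

```python
# What OP_CHECKSIGFROMSTACK would check, in spirit: a signature over an
# *arbitrary* message taken from the stack. Assumes the third-party
# `ecdsa` package (pip install ecdsa).
import hashlib
from ecdsa import SigningKey, SECP256k1

oracle_sk = SigningKey.generate(curve=SECP256k1)
oracle_pk = oracle_sk.get_verifying_key()

# e.g. an oracle attesting to a price; any "blessed" message structure the
# contract recognizes works the same way
msg = b"BTC/USD=11000"
sig = oracle_sk.sign(msg, hashfunc=hashlib.sha256)

# Script-level equivalent: <sig> <msg> <pubkey> OP_CHECKSIGFROMSTACK
assert oracle_pk.verify(sig, msg, hashfunc=hashlib.sha256)
print("oracle attestation accepted")
```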

Signed sequence commitments

Now on to a new commitment invalidation method. This is something I call signed sequence commitments. Rather than using the revocation model, every single state has a state number. We commit to that state number, and then we sign the commitment. That signed commitment goes into the script itself. This is cool because there's a random number R in the commitment, so when you're looking at the script you don't know which state we're on. To do revocation, we can say: open this commitment (and because it's signed by the 2-of-2 multisig, you can't forge it), then show me another signed commitment with a sequence number greater than the one in this commitment. That means there was some point in history where the two of you cooperated, but then one of the counterparties went back to this prior state and tried to cheat. So this is a bit of a simpler construction. We have a signed sequence number, you can prove there's a newer sequence number, and we hide the state number because, when we go to the chain, we don't necessarily want to reveal how many state updates we've actually done.

Signing is pretty simple: you have this number R, which we can derive by some deterministic method. We increment the state number. We have c, which is the commitment. The signature is important: it's actually an aggregate signature between both of us. There are a few techniques for this, like two-party ECDSA multiparty computation techniques, or some signature aggregation stuff; somehow you collaborate, make a signature, and it works.
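Here is a minimal sketch of the scheme, under two stated assumptions: a single `ecdsa` key stands in for the 2-of-2 aggregate signature, and SHA256 over the state number plus randomness stands in for the commitment.

```python
# Toy model of signed sequence commitments. Assumes the third-party
# `ecdsa` package; the single key below is a stand-in for the aggregate
# signature the two channel parties would produce together.
import hashlib, os
from ecdsa import SigningKey, SECP256k1

def commit(state_n: int, r: bytes) -> bytes:
    # R blinds the state number, so the script reveals nothing about
    # how many updates the channel has done.
    return hashlib.sha256(state_n.to_bytes(8, "big") + r).digest()

channel_key = SigningKey.generate(curve=SECP256k1)  # stand-in 2-of-2 aggregate

def new_state(state_n: int):
    r = os.urandom(32)
    c = commit(state_n, r)
    s = channel_key.sign(c, hashfunc=hashlib.sha256)  # needs both parties
    return (c, state_n, s, r)   # the constant-size tuple kept per state

latest = new_state(10)
stale = new_state(9)

def proves_breach(published, newest) -> bool:
    """Open both commitments; show the published state is older than one
    that both parties signed later."""
    c_pub, n_pub, s_pub, r_pub = published
    c_new, n_new, s_new, r_new = newest
    pk = channel_key.get_verifying_key()
    return (commit(n_pub, r_pub) == c_pub
            and pk.verify(s_pub, c_pub, hashfunc=hashlib.sha256)
            and commit(n_new, r_new) == c_new
            and pk.verify(s_new, c_new, hashfunc=hashlib.sha256)
            and n_new > n_pub)

assert proves_breach(stale, latest)  # broadcasting state 9 after 10 is provable cheating
```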

One cool part of this is that whenever I have state 10, I don't need the prior states anymore. I don't need any of the other extraneous states: 10 is greater than 9 and all the prior states. I now have constant client storage, which is really cool. We can have a million different states, but I only need to keep the latest tuple: the signature, the commitment, and the opening of the commitment. That's pretty cool because now outsourcers have constant-sized state per client. Before, it would grow with the size of the history, but now it's constant, which is a pretty big deal. It's also simpler in terms of key derivation, the script, things like that. We're just checking a signature.

There's a state machine in BOLT 2 where you update the channel state, but that doesn't need to be changed to implement signed sequence commitments. We've dramatically simplified the state. The cool thing is that, because the state is symmetric now, in multiparty channels there's no longer this combinatorial blowup of tracking who has published, what their state number was when I published, who spent from what... it's just way simpler with signed sequence commitments.

Review of state outsourcing

https://www.youtube.com/watch?v=V3f4yYVCxpk&t=15m55s

Signed sequence commitments can make state outsourcing more scalable. Outsourcing is used when a client isn't necessarily able to watch the chain themselves. In the protocol, the channels are configured with a timelock parameter T. If the channel user is offline and the outsourcer detects an invalid state broadcast on chain, then the outsourcer can act within this period of time T and punish the other counterparty. The cool part is that because we can outsource this, we can have lite clients. In the prior model, we send them the initial base points that allow them to re-derive the initial redeem script. We encode the state number of the commitment into the sequence number and the locktime, so when the outsourcer sees a transaction on chain, it extracts the state number from those fields and sees which state it is. Depending on which state it is, they need a different payload. This is what we do for the HTLCs, and this is cool because they can collapse the revocation storage down into a single thing, and we also give the outsourcers a description of what the justice transaction looks like; that's the transaction that pulls the money away from the cheating party. We use bip69, which gives us deterministic ordering of the inputs and outputs for everything. HTLCs require a new signature for every HTLC that was there. We also do something with the txid: we take the full txid of the state, which we assume the outsourcer can't predict because its contents are random, and we encrypt the blob using that txid. Now when they see a txid on chain, they can check whether the blob decrypts; if it doesn't, nothing happens, and if it does, they can act on it.
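A sketch of that encrypted-blob handoff follows (one possible encoding, assuming the third-party `cryptography` package; the 16-byte hint split is illustrative, not an exact production format). The client keys the justice blob to the breach transaction's txid: the outsourcer learns nothing unless that exact txid appears on chain, at which point it can decrypt and act.

```python
# Toy watchtower blob encoding. Assumes `cryptography` (pip install cryptography).
import hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

def blob_for_state(commitment_txid: bytes, justice_payload: bytes):
    hint = commitment_txid[:16]                     # lookup hint sent to the tower
    key = hashlib.sha256(commitment_txid).digest()  # not derivable from the hint
    nonce = os.urandom(12)
    ciphertext = ChaCha20Poly1305(key).encrypt(nonce, justice_payload, None)
    return hint, nonce, ciphertext

def tower_on_new_tx(seen_txid: bytes, db: dict):
    entry = db.get(seen_txid[:16])
    if entry is None:
        return None                                 # not a breach we were hired for
    nonce, ciphertext = entry
    key = hashlib.sha256(seen_txid).digest()        # full txid unlocks the blob
    return ChaCha20Poly1305(key).decrypt(nonce, ciphertext, None)

txid = os.urandom(32)
hint, nonce, ct = blob_for_state(txid, b"signed justice transaction bytes")
db = {hint: (nonce, ct)}
assert tower_on_new_tx(txid, db) == b"signed justice transaction bytes"
```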

There are some open questions here about doing authentication. You don't want someone to connect to the outsourcer and send a bunch of garbage, like gigabytes of random data. Instead, we want to make it somewhat more costly, which could be done with a ring signature scheme, or a bond, or a bunch of other things. There are also questions like: do you pay per state, do you pay when they take the action, is it a subscription model? We're working on these questions and they will be answered.

Lighter outsourcers

For outsourcers, this is the really cool part. Before, every single outsourcer had to store either the encrypted blob or the prior state for every state. If you did a million states, that state storage can get really large. If they have a thousand clients with a million states each, it starts to get really infeasible and prohibitive.

With signed sequence commitments, the situation for outsourcers is much better. We only need a single tuple per client: the commitment, the state number, the signature s, and the randomness R used in the commitment. We replace this tuple every single time. The outsourcer has about 200 bytes of constant storage for each client. That's all that needs to be updated per state update; no additional state needs to be stored.

What's cool compared to shachain is that in the old scheme you had to give the outsourcer every single state contiguously. If you ever skipped a state, it wouldn't be able to collapse the tree: you had leaves and you could collapse them to the parent, but without one of the leaves you can't collapse the tree. With signed sequence commitments, I can give you whatever state I want, and I can live with that particular little risk if I want to skip sending some updates to the outsourcer.

In the signed sequence commitments scheme, we're still going to use the encrypted blob approach. We're going to add a revocation type, which tells outsourcers how to act on the channel, and give them the commitment and the revocation info, which is how they do the dispatching. This is pretty cool because now I just have constant state. But it's a little bit different if I want to enforce how the outsourcer can sweep the funds; then I need a little bit more, I need to give them the signatures again. As it is right now, it's: hey outsourcer, if you do your job then you can take all of Bob's money. It's kind of a scorched-earth approach. But maybe I want to get some money back, or maybe there's some compensation scheme, and we could fix that as well.

Covenants

I skipped or deleted a slide. That's too bad. You can basically use covenants (there's a paper on this) with this approach to eliminate the extra signatures completely; you only have one constant-sized signature. The covenant would cover Bob's key: Bob can sweep this output, but when he sweeps it he has to do it in a particular way that pays me and pays him. This is cool because I can give him this public key and this tuple, and the output can only be spent in that one particular way. I can add another clause that says if Bob doesn't act within three-fourths of the way to the timelock expiration, then anyone can sweep this. If you're willing to sacrifice a bit of privacy, then you don't have to worry about breaches, because anyone can sweep the money, and if anyone tries to breach then miners are going to sweep it immediately themselves anyway.

Outsourcer incentivization

https://www.youtube.com/watch?v=V3f4yYVCxpk&t=20m35s

Ideally the outsourcers are compensated, because we want people to have a good degree of security and to feel that everything is safe, and that if someone tries some infidelity they will be punished. Do you do it on a per-state basis, or do you give the outsourcers a retribution bonus? With a retribution bonus, the outsourcer might say: alright, I get 10% of the value of Bob's funds, or whatever we negotiate. And they have to stick with this, because the signature I give them covers that 10%, and if they ever try to take more than 10% then the signature would be invalid. If anything is violated, everything else fails.

Ideally, breaches are unlikely. They shouldn't happen often. Maybe there are like 3 breaches a year, everyone freaks out, and then everything is fine afterwards. So maybe we should actually do a pay-per-state model where I pay the outsourcer for every single state. Periodically they could prune old data, but the cool thing about this new channel design is that the state is constant, so you're only ever replacing something. They are not allocating more state per client as history grows.

There's a cool idea floating around on the lightning-dev mailing list, and this guy Anton implemented it as well: each outsourcer has its own specific ecash token. I pay them, maybe over lightning, maybe with a zero-knowledge contingent payment, or maybe I just give them cash, and I get these outsourcer-specific ecash tokens. I unblind the tokens when I receive them. The cool thing is that when I use the tokens, they aren't linked to my initial purchase. Every time I send a new state, I also pay the outsourcer with a token, and he's fine with that because he's been compensated for it.
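To show the unlinkability mechanically, here is a toy Chaumian blind-signature flow (tiny, insecure RSA numbers purely for illustration; a real token scheme would differ, and this is not Anton's implementation). The tower signs a blinded token, the client unblinds it, and the token the client later spends cannot be linked back to the purchase.

```python
# Toy blind-signature ecash token. INSECURE parameters, illustration only.
# Requires Python 3.8+ for pow(x, -1, m).
import hashlib, math, random

# toy RSA keypair for the outsourcer (never use sizes like this for real)
p, q = 1000003, 1000033
N, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

token = hashlib.sha256(b"token-0001").digest()
m = int.from_bytes(token, "big") % N

# client blinds the token before sending it to the tower
r = random.randrange(2, N - 1)
while math.gcd(r, N) != 1:
    r = random.randrange(2, N - 1)
blinded = (m * pow(r, e, N)) % N

# tower signs blindly (in exchange for payment), learning nothing about m
blind_sig = pow(blinded, d, N)

# client unblinds: (m^d * r) * r^-1 = m^d  (mod N)
sig = (blind_sig * pow(r, -1, N)) % N

# later, the tower accepts the token without being able to link it back
assert pow(sig, e, N) == m
print("token accepted, unlinkable to the purchase")
```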

We could do both; it just depends on how frequently we're going to see breaches in the future and how the market develops around outsourcers and paying outsourcers.

Scaling outsourcing with outsourcer static backups

One thing that comes up is how you actually do backups in lightning. This is a little bit of a challenge because the wallets are more stateful. In the case of a regular on-chain wallet, you basically just have your bip32 seed, and if you have that seed you can scan the chain and get everything back. But in lightning, you also have channel data, which includes the parameters of the channel, which pubkeys were used, who you opened the channel with, the sizes, etc. There's static state, and then there's dynamic state, which you hand off to the outsourcer after every state update.

Maybe we could ask the outsourcer to also store this very small payload, a few hundred bytes at most. We can reuse our sybil-resistant authentication scheme to let the user retrieve the blob from the outsourcer. The user can lose all of their data, and as long as they have their original seed, they can deterministically reconstruct their identity, authenticate with the outsourcer, get their backup blob, and rejoin the network. There's a risk that the backup is out of date. In the protocol, when two clients reconnect, they try to sync channel state. If the channel state is off, they can prove to each other that they are on a particular state, and the value they hand over allows the user to sweep the funds on chain. I think this covers the backup scenario pretty well, and it depends on the people who are running the outsourcers or the watchtowers themselves.

There's a question of: who watches the watchtowers? Who makes sure they are actually storing the data? You can use a proof-of-retrievability protocol over the static and dynamic state they are storing. So you challenge a watchtower to prove that it is storing the data, and if it is, it provides the proof. And if not, then you can pay for this again, just like paying the watchtower, but this time it's a watchwatchtowertower.
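A minimal challenge-response sketch of that idea (hypothetical helpers; real proof-of-retrievability schemes are more involved): since the client can regenerate the expected blob from its seed, a fresh nonce forces the tower to prove it still holds the data, without re-downloading it.

```python
# Toy "who watches the watchtower" check.
import hashlib, os

blob = b"static channel backup + latest justice tuple"

def tower_respond(stored_blob: bytes, nonce: bytes) -> bytes:
    # The tower can only answer correctly if it still has the full blob.
    return hashlib.sha256(nonce + stored_blob).digest()

def client_check(expected_blob: bytes, nonce: bytes, response: bytes) -> bool:
    # The client recomputes the expected answer from data it can
    # deterministically regenerate from its seed.
    return response == hashlib.sha256(nonce + expected_blob).digest()

nonce = os.urandom(16)
assert client_check(blob, nonce, tower_respond(blob, nonce))        # tower kept it
assert not client_check(blob, nonce, tower_respond(b"??", nonce))   # tower lost it
```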

Second stage HTLCs

https://www.youtube.com/watch?v=V3f4yYVCxpk&t=24m5s

The way that HTLCs work currently is that they aren't just shoved into the commitment transaction, because we realized there were some issues with the way the timelocks interacted. Before, if you wanted a longer CSV value, which is your security parameter in terms of being able to act on something in a timely manner, then you also needed longer HTLC timelocks. As you had longer routes, you would have timelocks of 30, 40, 50 days, a very long time. So we wanted to separate those.

Now claiming an HTLC is a two-stage process. Once you broadcast the second-stage HTLC transaction, you wait out the CSV delay, and afterwards you can claim. The CSV value and the HTLC timelock are in two different scripts and fully decoupled from each other. This is the current commitment design in BOLT 3, where CSV and CLTV are decoupled in the HTLCs.

There are some downsides, namely that we have a distinct transaction for every HTLC. Every time we update a state, if we have 1000 HTLCs, then I need to give you a thousand signatures and you need to verify all of those signatures. And we need to store every single signature for every HTLC, because if we want to be able to handle breaches in the future, the outsourcer needs to be able to handle these HTLCs as well.

The solution here is to use covenants in the HTLC outputs. This eliminates the signing and verifying at commitment creation, and eliminates the signature storage for the current state. Right now we're basically emulating an off-chain covenant with a 2-of-2 multisig; if we had real covenants, we wouldn't need those signatures anymore. The goal of the covenant is to force them to wait out the CSV delay when they're trying to claim the output: they can only spend the output if the spending transaction creates an output that actually has a CSV delay clause. You then add an independent script branch for the HTLC revocation clause, reusing the commitment invalidation technique.

As a stopgap, you could do something with sighash flags to allow you to coalesce these transactions together, as sketched below. Right now if I have 5 HTLCs to claim on chain, there are 5 different transactions. More liberal sighash flags would let you coalesce these into a single transaction, which is cool because then you have this single transaction get confirmed, and after a few blocks you can sweep those into your own outputs and it works out fine.
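A toy model of why the flags help (the `toy_sighash` function is a stand-in, not Bitcoin's real sighash algorithm): if each signature commits only to its own input and matching output, roughly what SIGHASH_SINGLE|ANYONECANPAY does, then independently signed HTLC claims can be merged into one sweep without invalidating any signature.

```python
# Toy demonstration of signature coalescing under liberal sighash flags.
import hashlib

def toy_sighash(inputs, outputs, index, single_anyonecanpay: bool) -> bytes:
    if single_anyonecanpay:
        committed = repr((inputs[index], outputs[index]))  # just my pair
    else:
        committed = repr((inputs, outputs))                # the whole tx
    return hashlib.sha256(committed.encode()).digest()

# five HTLC claims, each "signed" in isolation as a 1-in/1-out transaction
pairs = [(f"htlc_input_{i}", f"my_output_{i}") for i in range(5)]
digests = [toy_sighash([inp], [out], 0, True) for inp, out in pairs]

# merge them all into a single sweep transaction
merged_inputs = [inp for inp, _ in pairs]
merged_outputs = [out for _, out in pairs]
merged_digests = [toy_sighash(merged_inputs, merged_outputs, i, True)
                  for i in range(5)]

# every original signature still binds after the merge
assert digests == merged_digests
```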

Multi-party channels

There's some cool work from some people at ETH Zurich on multi-party channels ("Scalable funding of bitcoin micropayment channel networks"). It's a different flavor of channels, because a flat multi-party channel has some issues: how to pay for punishment, and whether I have to wait for everyone else to be online just to update my own state. So what they did instead was make this hierarchical model of channels.

There are a few different concepts. There's "the hook": if there were five of us, it's a 5-of-5 multisig. Assuming signature aggregation, we can aggregate that down into a single signature and it looks like a regular channel; even though it might be a 200-party channel, it can be made succinct using signature aggregation.

Once you have the "hook", which creates the giant multisig, you have "allocations", which decide the distribution of funds inside each channel. If you look at the diagram, it's just a series of 2-of-2 channels. You can keep doing this over and over again. The cool takeaway is that they realized you could embed a channel inside another channel and do recursively deep channels. Every time you make a new channel, it can have its own completely different rules.

I could have a bi-directional channel on one side, and on the other side I could have a multi-party channel and all these other things. Nobody outside would know. We could do all of this collaboratively off-chain without any on-chain transactions.

This is pretty cool, but there are some issues with state blowup on chain. One of the reasons why they didn't do key-based revocation is that you have asymmetric state, so you get a blowup that scales with the number of participants, and that gets prohibitive. In the paper, they used the invalidation tree again. Any time you want to update the hook, like if you want different allocations or you decide you want to get rid of one of the parties, you eventually have to update the hook, depending on the distribution of the channel, and that requires an invalidation mechanism. They use tree-based invalidation via the invalidation tree, but the underlying problem is that it can get really large. As you update the hook more and more, the tree gets deeper. You need to traverse a huge tree to get to the hook, and even below that there might be more transactions. So you could have hundreds of transactions on chain just to force-close.

However, we could reuse the signed sequence commitments to solve some of these problems. We could use this for the hook update. This makes it more scalable. Everyone would have constant sized state independent of the number of participants in the channel, assuming that I'm keeping track of my own individual transactions. This has the same benefits as far as outsourcing. If we're using signature aggregation, then everything is very very tiny, and that works out well too.

Fee control

https://www.youtube.com/watch?v=V3f4yYVCxpk&t=29m

One issue that we realized is that it's not super easy to modify the fees on a channel's commitment transaction once it's been signed. Let's say my peer is offline and I want to force-close the channel. The fee on that transaction was actually locked in ahead of time. Say I was offline for two days, fees on the chain went up, and now my transaction might have insufficient fees to get confirmed in a timely manner. At this point I would like to use child-pays-for-parent, where I create a bigger package with a higher fee rate, but I can't, because the output is timelocked: I would need to wait for the timelock to sweep it, in order to sweep it I would need the transaction to be confirmed, and in order for it to be confirmed it needs sufficient fees. You're in a weird circular position, right? So one idea is to have a reserve. The way the design works right now, we ensure that each participant has a certain percentage of money inside the channel at all times. The idea here is that you create an anyone-can-spend reserve output that anyone can use to anchor the transaction. Even if I messed up guessing my fees initially, I can still use the anyone-can-spend output to bump them.
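Back-of-the-envelope arithmetic for the anchor idea (illustrative numbers): a child spending the anyone-can-spend output lifts the whole package's feerate, no matter how badly the commitment fee was guessed.

```python
# Child-pays-for-parent via an anchor output, with made-up numbers.
parent_fee_sat, parent_vsize = 2_000, 700  # fee guessed days ago, now too low
target_feerate = 50                        # sat/vbyte needed to confirm now
child_vsize = 150                          # anchor spend with one fee input

package_vsize = parent_vsize + child_vsize
child_fee_sat = target_feerate * package_vsize - parent_fee_sat

print(f"child must pay {child_fee_sat} sat so the package averages "
      f"{(parent_fee_sat + child_fee_sat) / package_vsize:.0f} sat/vbyte")
```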

We want to eliminate the fee guessing game. In the protocol there's an update_fee message, which can be sent when fees go up or down. If there are very rapid changes in the fee climate on the chain, this can get kind of difficult. Fee estimation is a hard problem.

The solution is that you don't want to apply fees to the commitment transaction at all. It has zero fees, and you have to add a new input to get it confirmed. So we propose using more liberal sighash flags to allow you to add an input, or an input and an output. When I broadcast, that's when I decide what the fees are. But whenever I add this fee input, the txid of the transaction changes, and if the txid changes then any dependent transactions all become invalidated. So what we do is sign with SIGHASH_NOINPUT.

I don't know if you remember it, but SIGHASH_NOINPUT was proposed a long time ago. Normally your signature commits to the txid of the transaction that created the input and the position of the input. With SIGHASH_NOINPUT, we're saying: don't sign that, only sign the script. I could have a dependent transaction that is still valid even if the parent transaction changes, because the pubkeyscript stays the same. You don't have to worry about guessing fees ahead of time, and it gives you much more control over the confirmation time of your transactions.
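A toy digest sketch of the difference (not the real sighash serialization): dropping the parent outpoint from the signed data is exactly what keeps dependent signatures valid when adding a fee input changes the parent's txid.

```python
# Toy model of SIGHASH_NOINPUT semantics.
import hashlib

def toy_digest(prev_txid: bytes, prev_script: bytes, outputs, noinput: bool) -> bytes:
    parts = [prev_script, repr(outputs).encode()]
    if not noinput:
        parts.insert(0, prev_txid)  # a normal sighash binds the exact outpoint
    return hashlib.sha256(b"".join(parts)).digest()

script = b"2-of-2 funding script"
outputs = ["alice: 6 BTC", "bob: 4 BTC"]

txid_before = hashlib.sha256(b"commitment tx, no fee input").digest()
txid_after = hashlib.sha256(b"commitment tx + fee input").digest()  # txid changed!

# a child signature keyed on the outpoint breaks; a NOINPUT one keeps working
assert toy_digest(txid_before, script, outputs, noinput=False) != \
       toy_digest(txid_after, script, outputs, noinput=False)
assert toy_digest(txid_before, script, outputs, noinput=True) == \
       toy_digest(txid_after, script, outputs, noinput=True)
```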

Q&A

Q: To sum it up, the upgrades to bitcoin script that would be helpful are covenants, SIGHASH_NOINPUT, and OP_CHECKSIGFROMSTACK?

A: Yeah. With SIGHASH_NOINPUT and OP_CHECKSIGFROMSTACK we could do a lot. Covenants are the holy grail and I didn't even get into what you could do there. There's some controversy around that. I think it's more likely for OP_CHECKSIGFROMSTACK to get in. These are minor additions, unlike covenants. The possibilities blow up because you have signed data. It's extremely powerful. If you have OP_CHECKSIGFROMSTACK then you might accidentally enable covenants.

Q: There have been a bunch of discussions about how large centralized lightning network hubs could lead to censorship. Have you done any research on how to keep the lightning network decentralized?

A: There's nearly zero barrier to entry. There are some small costs, but they're not high. The capital requirements to have a massive hub are really prohibitive. There's not really censorship, because at the routing layer we basically use onion routing; they don't really know exactly who you're sending money to. They could cancel the transaction, but then you route around them.

Q: I was curious whether there is a limit to the number of state channels that you can have at one point. If not, would the creation of unlimited state channels in a sybil manner be an attack vector?

A: The upper bound is the number of UTXOs you can have in the UTXO set. Here we had one output with 20 channels. It's effectively unbounded. You need to get into the chain. With sybil attacks, you're paying money to get into the chain so there's some limits there.

Q: There was an assumption about attackers not colluding with miners.

A: Let's say there was a miner, and there's this massive channel, and my counterparty colludes with them. They could choose not to mine my justice transactions. You could say that if there's only a 1% chance of getting into the chain, that might be enough for fraud to occur. This can also get weird if miners are reorging out blocks with my transactions. There's the committed-transactions format, maybe with a zero-knowledge proof, where only later on would it be revealed what's being spent. Commit-reveal could get past this problem of miner censorship.

Q: I am curious about, as you upgrade the protocol, how do you anticipate an upgrade schedule for, I guess, the wallets, hubs, and outsourcers?

A: There's actually a bunch of extension points in the protocol, including the onion protocol and the service bits on lightning node messages. We share the layer 3, HTLC layer. Individual links might have different protocols but they share the same HTLCs. We could have feature bits that we can check to see if someone knows the new fancy channel tech. I foresee it being gradual. Even in the future there will be many channel types and many routing layers. Everything is at the edges, and that's all that really matters, and that's a cool design element of the way it turned out.

Q: I like this multi-party channel design.

A: They are very cool.

Q: Can you elaborate on some of the research work done for secure multi-party computation?

A: There's a bunch of protocols that are enabled by OP_CHECKSIGFROMSTACK where inside of the multi-party computation if you can generate signatures and those signatures sign particular states then you could use that for invalidation as well. Multi-party channels can be used to pool liquidity. There's opportunities for gaming, like one in each geographic region and all of the economy is done through a single multi-party channel and then you could route commerce between those giant channels. It still works itself out.

Cool, thank you.

https://www.reddit.com/r/Bitcoin/comments/7u79l9/hardening_lightning_olaoluwa_osuntokun_roasbeef/
