Potzblitz! - The weekly Lightning talk #4 with Christian Decker and Michael Folkson
May 3, 2020 19:46 · 13019 words · 62 minute read
And we are live! Welcome to the fourth episode of Potzblitz, the weekly Lightning talk. Today we have a very special guest: Christian Decker of Blockstream. They also call him Dr. Bitcoin. Our co-host today is Michael Folkson from the London BitDevs meetup. He has been running a bunch of Socratic seminars and has the technical knowledge, so he will be the perfect person to talk to Christian, and he has already prepared a bunch of questions. We announced this on Twitter a couple of days ago, so he has been collecting questions already. If you have questions during the stream, please put them in the YouTube comments, or ask on Twitter using the hashtag Potzblitz. We also have a Mattermost channel at mm.fulmo.org; the channel is called Potzblitz, and you can also find Christian there. As always, once we're done streaming you can join the Mattermost channel and the Jitsi room, hang out a bit and ask some more questions. I expect we will not have time to answer all of your questions today, so please join us in the Jitsi room afterwards. We are using Jitsi to stream this; it's an open source video conferencing tool and it usually works quite well. Sometimes it doesn't, but most of the time it does, so if there are any glitches please forgive us. It is basically permissionless and doesn't collect your data, so we thought it would be a pretty good option. If you like this, give it a thumbs up and subscribe to the channel so we can do this another time. And I think that's about it, so let me introduce the speakers: here is Christian, and the topic of his talk is "Back the F*ck Up". Thank you for coming, Christian.

Yeah, thank you so much for having me. I hope people will understand, or excuse, the swear word in the title, but it fits in quite nicely with my feelings about backups, or the lack thereof, in Lightning. Let me quickly pull up the presentation I've prepared. That should be live, is it, Jeff? It's working, all good? Okay, perfect. So the title today is basically "back the fuck up": how to back up and secure your Lightning node in a way that you cannot end up losing money by operating your node. Like Jeff said, I'm Christian Decker, I work for Blockstream, hence the small logo up there, and I work mainly on the c-lightning implementation of the Lightning protocol, and I help out a bit on the protocol itself. You might notice that later on, because I tend to emphasize what c-lightning does correctly, but more on that later.

First of all, why do we want to back up in the first place? Everybody will be shouting right now that we're handling money, so the rationale for having a backup of the thing that is handling our money is pretty fundamental. The way we do backups in on-chain systems, like your Ledger wallet or your Bitcoin Core node or any manner of systems where you perform on-chain payments, is basically to write down 24 words. If you ever destroy or lose your device, you can recover from those 24 words, which simply seed a random number generator that is then used to generate addresses. This is quite a common thing in the on-chain world: you have your 24 words, you write them down on paper, you send half of them to your parents or you engrave them on metal, all in the name of making this as disaster resilient as possible and ensuring that you can actually recover. So that's quite an easy option, and we can do that in Lightning as well.
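As a rough illustration of that on-chain recovery model, the 24 words are deterministically stretched into a wallet seed, from which every address follows. A minimal BIP39-style sketch in Python; the 12-word mnemonic below is just a placeholder example.

```python
import hashlib
import unicodedata

def mnemonic_to_seed(mnemonic: str, passphrase: str = "") -> bytes:
    """BIP39-style seed stretching: PBKDF2-HMAC-SHA512, 2048 rounds, 64-byte seed."""
    mnemonic = unicodedata.normalize("NFKD", mnemonic)
    salt = unicodedata.normalize("NFKD", "mnemonic" + passphrase)
    return hashlib.pbkdf2_hmac("sha512", mnemonic.encode(), salt.encode(), 2048)

# Placeholder mnemonic; a real wallet hands you 12 or 24 words from the BIP39 wordlist.
words = ("abandon abandon abandon abandon abandon abandon "
         "abandon abandon abandon abandon abandon about")
seed = mnemonic_to_seed(words)
print(seed.hex())  # every on-chain address is derived deterministically from this seed
```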
However, it turns out that backups in Lightning are quite a bit harder, and they can even be dangerous. To see why they are dangerous, let's quickly go through how not to back up a Lightning node. If you're anything like me, you probably end up with a couple of backups like this: we have the original .lightning folder, and at some point we decide we might want to take a backup of it, so we rename the directory or copy it over to something with an extension, let's call it .orig. Then we make some more changes, perform another backup and call it v1, but we're computer scientists so we start counting at zero, and even our naming scheme becomes very inconsistent over time. But that's not the whole issue. The issue is that even if we are able to take backups of our Lightning node, the act of restoring one can put some of our funds in danger, simply because of the mechanism we use in the Lightning Network to update state.

To see that, we need to look at what the update mechanism in Lightning does. In Lightning we use what is called a penalty-based mechanism, in which each of these square boxes represents a state. Initially we have the state where participant A has 10 bitcoin and the other participant in the channel has 0. Then we transfer one bitcoin over, and suddenly we have two states that are active. What we do is, once we have agreed on the new state, we poison the old state: if any participant ever publishes the transaction representing that old state, the poison is activated and the node that misbehaved by publishing an old state loses all of its funds. We do that a couple of times, transferring one or two bitcoin from A to B, poisoning the old state each time. Then we perform another update that is a bit special in that we don't actually transfer money: these state transitions can also occur because, for example, we changed the fee that we need to pay on-chain. So even non-user-triggered actions can result in a state change, agreeing on a new state and poisoning the old one. Finally we take this last state and transfer one more bitcoin from B back to A. This final state, the one without the biohazard symbol, is basically the only state that can be enacted without incurring a penalty; all of the prior states are poisoned. By poisoned I mean the counterparty always has a matching reaction it can use to punish me if I publish that old transaction.

So if we take a backup anywhere along the way, we might end up losing some information, and the problem is exactly that. This slide is a copy of the previous image: both A and B agreed on this state being the last one, but because I took a backup at an earlier state and then had to restore it, to me it appears that the earlier state is the last valid state, and I should be able to send it on-chain and get my five bitcoin. From the point of view of my counterparty that would be cheating, because they have a matching penalty transaction for that state, and if I try to enact it I lose my whole stake in the channel. So not only is backing up and restoring a Lightning node rather hard, it is also quite dangerous, because if you end up restoring a state that wasn't the last one, you might accidentally end up cheating.
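To make that failure mode concrete, here is a toy sketch of the penalty bookkeeping, assuming nothing about the real BOLT transaction formats: each update reveals a revocation secret for the previous state, so broadcasting anything older than the latest state can be punished.

```python
from dataclasses import dataclass, field
import os

@dataclass
class ToyChannel:
    """Toy penalty-based channel: not real Lightning, just the bookkeeping."""
    balance_a: int
    balance_b: int
    state_num: int = 0
    revoked: dict = field(default_factory=dict)   # state_num -> revocation secret

    def update(self, a_to_b: int) -> None:
        # Agree on the new state, then "poison" the old one by revealing its secret.
        self.revoked[self.state_num] = os.urandom(32)
        self.state_num += 1
        self.balance_a -= a_to_b
        self.balance_b += a_to_b

    def broadcast(self, state_num: int) -> str:
        if state_num in self.revoked:
            # The counterparty holds the matching penalty transaction and takes everything.
            return "PENALTY: broadcaster loses its whole channel balance"
        return "OK: final state enacted"

ch = ToyChannel(balance_a=10, balance_b=0)
ch.update(1)             # A pays B 1 BTC; state 0 is now poisoned
ch.update(2)             # A pays B 2 more; state 1 is now poisoned
print(ch.broadcast(1))   # restoring a backup of state 1 and publishing it -> PENALTY
print(ch.broadcast(2))   # latest state -> OK
```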
Over time we, the Lightning developers, came up with a range of possible solutions to mitigate the situation or even prevent it altogether, and sadly none of these is complete or gives you 100% protection, but we're getting closer. What I'd like to do is step through the various mechanisms that have been used previously, and what we are working towards right now, when it comes to securing the data you have in a channel.

The first one is something Eclair did right at the start, when we began rolling out these Lightning nodes. ACINQ, the developers of Eclair, were getting feedback that people were losing the devices with their Lightning node on them and were unable to recover the funds on those phones. That's quite natural: phones have a tendency to drop into water, or break, or just get lost, and if you have money on the phone that's doubly sad. So what the Eclair team did was add a trivial backup mechanism that would take the state of the Lightning node after each change and push it to a Google Drive folder, from where they could restore the entire state. I'm not too familiar with whether that was done incrementally or in a batch, but this mechanism seems not to be available anymore, simply because people soon figured out that if I start my Eclair wallet on phone A and then restore on phone B, I can run this node from both phones at the same time and they share data with each other. Like I said before, if you end up restoring a node that is not at the latest state, you end up cheating. The same thing happens if you have two nodes running against the same data set, pretending to have the same channels and therefore contradicting each other as they proceed in the Lightning protocol. That may then end up looking like a cheat attempt, and the node gets punished. So this first attempt was mainly intended for recovery, but people abused it a bit to share a single node across multiple devices. Don't do this.

A better solution is the static channel backups implemented in lnd. On the left side you can see the structure of a single static channel backup. It has a bunch of information that relates to how the channel was created, who the counterparty is, and all of the information that is static in the channel and doesn't change over time. What this allows you to do is start the lnd node back up with this backup file and restore what they call a channel shell: not a true channel, but enough information about the channel that you can reconnect to your peer and ask for the information that is relevant to getting your funds back. This is rather neat because it allows you to take a static channel backup and store it somewhere offline. It's probably not small enough to write on a piece of paper and type back in when you want to restore, but you can store it on a USB stick or on a file server somewhere.
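The exact lnd serialization isn't shown in the talk, but conceptually a static channel backup only carries the parameters that never change. A hedged sketch of the kind of fields involved (names are illustrative, not lnd's actual format):

```python
from dataclasses import dataclass

@dataclass
class StaticChannelBackup:
    """Illustrative only: roughly the static data an SCB-style backup carries."""
    funding_txid: str              # outpoint of the funding transaction
    funding_output_index: int
    channel_capacity_sat: int
    remote_node_pubkey: str        # who to reconnect to for recovery
    remote_node_addresses: list[str]
    static_key_material: dict      # key descriptors needed to sweep our output
    # Deliberately absent: balances, HTLCs, commitment numbers -- anything that
    # changes with every update. That is why one backup per channel is enough,
    # and why recovery needs the peer's help to learn the latest state.
```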
That alone should allow you to at least get your funds back. It's minimal in the sense that it doesn't change while you're renegotiating the state of the channel itself, so for each channel you take one backup and you're safe until you open the next channel. The downside is that it requires you to actually be able to contact the counterparty node when you attempt to recover, simply because the information we track in the Lightning protocol cannot be recomputed solely from this structure. You basically need to reach out to your peer and ask: hey, what's the current state, please give me all of the information to recognize and retrieve my funds after closure, and please go and close the channel. So this basically just reconnects and tells the peer to close the channel. The downside is, of course, that relying on the peer to do something for you might get you into a weird situation, because you're telling the peer: hey, I lost my data, can you please help me recover my funds? The counterparty could decide not to give you that information, or it could refuse to close the channel on your behalf, which would result in them holding your funds hostage. All of the current implementations implement this correctly and will collaborate with lnd nodes to recover their funds, but a malicious actor could of course change the code so that lnd nodes cannot close or recover successfully. So while this is not a perfect backup, it's an excellent emergency recovery mechanism to claw back as much of your funds as possible.

The third mechanism, and this goes more into what c-lightning views as the correct way of doing this, is the ability for plugins attached to c-lightning to keep a synchronous database log in the background, on whatever medium you want. The reason we made this a plugin is that we don't want to tell you how to manage these backups; we want you to have the ability to incorporate them into your own environment. This basically tells the plugin about every database transaction that would result in a modification of state, before we actually commit that transaction. It's a write-ahead log of the database transactions we are about to perform, recorded before committing to the changes. Depending on your infrastructure, you might want an append-only log, you can have compressible logs, or you can even have a replica of the original database being managed concurrently with the main database. If you do that, you can fail over immediately if your node dies: you have a replica that tracks changes alongside the main database on a network mount, and if your node dies you spin up another node, connect it to the replica, and just continue where you left off. So we don't rely on external help, and we don't need to interact with our peer, which might not even be available anymore. The downside is, of course, that these are synchronous backups that require a write to the backup on every state change.
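A minimal sketch of that plugin approach, using the pyln-client library and c-lightning's db_write hook. Treat the payload fields ("writes", "data_version"), the return value and the log path as assumptions from memory; they have changed between c-lightning versions.

```python
#!/usr/bin/env python3
"""Append-only backup plugin sketch for c-lightning's db_write hook.

Minimal illustration, not a production backup tool: field names and the
expected return value are assumptions and may differ between versions.
"""
import json
from pyln.client import Plugin

plugin = Plugin()
LOG_PATH = "/mnt/backup/lightningd-wal.jsonl"   # hypothetical network mount

@plugin.hook("db_write")
def on_db_write(writes, data_version=None, plugin=None, **kwargs):
    # Persist the SQL statements *before* lightningd commits them, so the log
    # is always at least as new as the database it mirrors.
    entry = {"data_version": data_version, "writes": writes}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")
        f.flush()
    return {"result": "continue"}   # tell lightningd it is safe to proceed

plugin.run()
```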
The other mechanism that c-lightning has is database replication and failover. We've abstracted the database interface in such a way that we can have c-lightning talk to a PostgreSQL database, for example, whereas the default database is SQLite3. Postgres does require a bit more setup, but it allows you to have synchronous replication of your node and even transparent failover from one database instance to the next. This is something enterprise users are very interested in, because they can rely on what they already know about replicating, mirroring and failover with Postgres databases, which is very well known in the wider software engineering world, whereas replicating and scaling a single Lightning node is very niche knowledge. So we can reconnect to the institutional knowledge that is already out there, and people feel pretty comfortable with the way databases can be replicated and secured. As I mentioned, this is more of an enterprise setup and it comes with a bit of upfront investment cost: you actually have to set up the database and set up replication. But Gabriele Domenichini has written an excellent guide on how to set up Postgres in combination with c-lightning, and I can only recommend reading it and seeing if you can get it working as well. The upside is that restore is basically immediate, because we have replicas with the up-to-date information: if the node dies, we just spin up a new node and have it connect to the database; if the database dies, we just switch over to a new master database, and the node doesn't even learn that the database failed. So this is another form of synchronous replication.

And of course this wouldn't be a talk by me if I didn't somehow get eltoo into the picture. Eltoo is a paper we wrote two years ago now about an alternative update mechanism, in which we no longer penalize people for publishing an old state. Instead, we override the effects that the misbehavior would have and enact the state that we actually agreed on. What we have here is a setup transaction that creates a shared address between the blue user and the green user, and a refund transaction that gives the original owner back the funds after a certain timeout expires. The way we perform updates is by having an update 1 that takes these ten bitcoin, ratchets them forward, and attaches settlement 1, in which both parties have 5 bitcoin each after the timeout expires. Then we update again by ratcheting the funds forward and attaching settlement 2 with the newer state; this time the green user sends one bitcoin over to the blue user. Update 2 basically overrides whatever would have happened in settlement 1. So instead of punishing the counterparty for misbehaving, we override the effect it would have had. Let's say the green user would like to enact settlement 1: it sends out update 1 and now has to wait for the timeout to expire. In the meantime, the blue user publishes update 2, overriding settlement 1 and starting the timer on settlement 2. We no longer punish our counterparty for misbehaving; we override the effect and get to the state that was agreed upon in the end.
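A toy sketch of that idea, with the same caveat as before (pure bookkeeping, no real transactions): every update carries an increasing state number, and a newer update can always be attached on top of an older one, so publishing a stale state gets corrected rather than punished.

```python
from dataclasses import dataclass, field

@dataclass
class ToyEltooChannel:
    """Toy eltoo-style channel: later updates override earlier ones."""
    latest_state: int = 0
    settlements: dict = field(default_factory=lambda: {0: (10, 0)})  # state -> (bal_a, bal_b)

    def update(self, balance_a: int, balance_b: int) -> None:
        self.latest_state += 1
        self.settlements[self.latest_state] = (balance_a, balance_b)

    def close(self, published_state: int) -> tuple:
        # Anyone can "ratchet forward": if a newer update exists it simply
        # replaces the published one before its settlement timeout expires.
        effective = max(published_state, self.latest_state)
        return self.settlements[effective]

ch = ToyEltooChannel()
ch.update(5, 5)   # update 1
ch.update(6, 4)   # update 2: green sends one coin back to blue
# A node restored from a stale backup publishes update 1; the counterparty
# responds with update 2, and the agreed final state is what settles.
print(ch.close(published_state=1))   # -> (6, 4), no penalty involved
```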
So if I were to back up and restore, and the restore brought me back to the point where update 1 and settlement 1 were the latest state I knew about, then all that would happen is that my counterparty sends out update 2 and corrects my mistake, instead of hitting me in the face by stealing all my funds. This is something that will allow us to have a much more robust mechanism for backups and restores, and I think ultimately this might become the way we end up doing stable nodes sometime in the future. Many people have argued that this loses the penalties, but as far as I can see the penalty mechanism is intrinsically linked with our inability to create backups and restore, because whenever we back up and restore we might end up in a situation where we can be penalized. So I think we might want to distance ourselves from this satisfaction of punishing people and move to a more collaborative solution, which in this case would be eltoo.

Okay, I've just given you five different backup approaches, but how about creating a unified one? Everybody seems to be pulling in a different direction and experimenting with different ways of backing up and restoring. Wouldn't it be nice to have a backup that can be created by c-lightning and later imported into lnd, and to restore from that? The answer is, of course, that it would be really nice if in the end we had this convergence of different ideas and different implementations, where we agree on a unified standard for these kinds of operations. But currently, at least, the internal structure of the data we track is not in a form where we can share these backups between implementations. Maybe eventually we will get there, but currently that's not the case. I would actually argue that this experimental phase we are in is good, because it allows us to experiment with different trade-offs and come up with better ideas in the long run, ideas that are capable of getting a majority of the ecosystem behind them and actually making this a better standard overall. That has always been the way Lightning development has worked: we've gone off in different directions, then re-merged and shared experiences about how some of our approaches worked and others didn't. So my hope is that we will end up with a unified standard.

I have one more slide, which is basically the resources where you can find all of this information: the cloud backup by ACINQ, the excellent article on recovering funds from lnd using static channel backups, a couple of backup plugins and the documentation on the db_write hook, how you can wire c-lightning to run with a PostgreSQL database, and of course Gabriele's excellent tutorial on how to set up replication with PostgreSQL so you can have the real enterprise feeling for your node. And yeah, read the eltoo paper, I think it's kind of neat, but of course I'm biased. That's it from my side, and I'll quickly switch over so I can see all of your faces again and get some human touch back.

Hello, hello, human. Well, I mean, it's a human, but there is some doubt whether we are, in fact, human. Sorry for that inaccuracy.
Maybe this is the proper time to ask you to take off your human mask. Thank you, Christian, for the talk. Right now it just showed the titles, so can you send us the links later so we can put them in the video description? Yes, I have a PDF version of this and we can add that later on. Great. If anybody has any questions, now is a good time to ask, either on YouTube or in the Mattermost channel at mm.fulmo.org. As I said earlier, Michael has already collected some questions from Twitter, so I guess Michael will jump into it right now.

I was itching to reply right there, but that's the price of a live stream. Let's start with a few questions on the content of the presentation, and then we'll go over to the questions we got from Twitter, if any of those relate to backups. So, on backups: you alluded to some of the trade-offs there, like ease of use; there's trust, in terms of requesting certain information from your remote party; there's security; there's cost. Are there any other trade-offs, and which of those align towards a technical user, a non-technical user, an enterprise?

One very important one that I missed before is that these backups come with some latency. Depending on which backup you choose, you might end up sacrificing some of the throughput of your node, simply by having to reach out to a backup plugin or to remote storage, or whatever other operations you need to perform to make this safe. The only option that doesn't incur this cost is the static channel backups, because they are done once at the beginning when your node is set up, and then once every time you open a channel, but there is no additional latency cost at each update. Whereas the full backup solutions, like pushing to a cloud server or having a plugin, will obviously incur a throughput cost at update time. So the static channel backups are probably the easiest ones to set up. We're trying to make it as easy as possible for the plugins as well: we have a community-driven plugin repository, and we have people working on Dropbox backends, where every single update gets written to Dropbox and compaction happens in the background. Eventually it should end up being a plugin that you drop into a folder and everything is taken care of from there, but currently there is a bit of work you need to invest to get these working. And of course the extreme example is the replicated database backend, which really takes some digging to get right. Not for the faint-hearted, I guess, but we're getting there.
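The Dropbox-style backend mentioned there works roughly like this: append every change to a log, and periodically compact the log into a fresh snapshot so it doesn't grow without bound. A toy sketch of that pattern, not the actual plugin's code:

```python
import json
import shutil

class IncrementalBackup:
    """Toy append-only backup with periodic compaction."""
    def __init__(self, db_path: str, log_path: str, compact_every: int = 1000):
        self.db_path, self.log_path = db_path, log_path
        self.compact_every = compact_every
        self.pending = 0

    def record(self, writes: list[str]) -> None:
        # Append one batch of database writes to the incremental log.
        with open(self.log_path, "a") as f:
            f.write(json.dumps(writes) + "\n")
        self.pending += 1
        if self.pending >= self.compact_every:
            self.compact()

    def compact(self) -> None:
        # Replace the long log with a full snapshot of the current database,
        # then start an empty incremental log on top of it.
        shutil.copyfile(self.db_path, self.log_path + ".snapshot")
        open(self.log_path, "w").close()
        self.pending = 0
```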
Cool. So I imagine if lnd or another implementation comes up with a backup strategy that takes those trade-offs in a different way but turns out to be popular, there could be a c-lightning plugin that just replicates whatever that backup does?

Yeah, that's how our development works as well, and I wanted to point that out in the second-to-last slide: we do learn from what other teams do, and we bring some of these innovations back to the specification to be discussed. If there is a clear winner when it comes to usability and other trade-offs, there is absolutely no hurdle to that becoming part of the specification itself, at which point it becomes available to all users and all implementations.

Cool. And again, you alluded to this, but what exactly does need to be backed up? Chris Stewart did a very good presentation at the Lightning Conference last year on the six different private keys you need to think about with Lightning: some need to be hot, some need to be current. What exactly needs to be backed up, beyond the keys, in these different scenarios?

Obviously the first one is the seed key that you use to generate all of your addresses. That's pretty much identical to on-chain: it's what creates all of the addresses that are used throughout the lifetime of your funds, and when they enter or leave a channel they will end up on addresses generated from it. So that's the central piece of private information you need to back up. Then there are a variety of places where keys are used that are not directly derived from this seed key, namely whenever you and the peer you have a channel with need to generate a shared address: that will not be part of the derivation tree rooted in the seed key. Those keys need to be backed up along the way. This includes HTLC secrets, so whenever a payment goes through your channel you'd better remember the key that the payment was contingent on; the indices of the channels in the database, because that's the index of the derivation path that is going to be used; and revocation secrets, which are important because they are the poison we use to keep each other honest. And my absolute favorite is the per-commitment points: an EC point that is added to your direct output if I close the channel, so without that point you wouldn't even recognize that those funds are supposed to go to your node. That's a really weird use of these points, because they are basically there to make sure that each state commitment is unique, but it makes it so hard to create an on-chain wallet for Lightning. When I stumbled over those while implementing the wallet, I just swore. Yeah, it's challenging.
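That per-commitment tweak is why a plain on-chain wallet can't even spot its own output: the key that receives your funds is your basepoint offset by a hash of the per-commitment point. A sketch of that derivation, assuming the coincurve library and treating the exact BOLT 3 formula here as a from-memory reconstruction:

```python
import hashlib
from coincurve import PublicKey  # assumption: coincurve's tweak-add API

def derive_output_pubkey(basepoint: bytes, per_commitment_point: bytes) -> bytes:
    """Roughly BOLT 3: pubkey = basepoint + SHA256(per_commitment_point || basepoint) * G.
    Both inputs are 33-byte compressed points. Without the per_commitment_point a
    wallet cannot reconstruct this key, so it won't recognize the output as its own."""
    tweak = hashlib.sha256(per_commitment_point + basepoint).digest()
    return PublicKey(basepoint).add(tweak).format()
```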
What about watchtowers, is there a role for watchtowers in backups? They are predominantly there so you don't have to be online the whole time, so they can stop someone cheating you, but is there a role for them to store some state on your behalf? Do you see the role of a watchtower expanding, with different services being provided around watchtowers?

Yeah, absolutely. The role of a watchtower is definitely not clear-cut. I usually think of a watchtower solely as a third party that I put in charge of reacting when something nefarious happens on-chain, but that definition can be expanded to also include arbitrary data storage on behalf of the node, maybe in exchange for a small fee. So I wouldn't say there is a clear-cut boundary between a watchtower and a backup service. The name sort of implies the primary use for watchtowers, which is watching your channels in order to react if your counterparty dies or does something nefarious, whereas backups are there to store your previous, or your very latest, database changes so you can recover the state later on. If we add a way for nodes to retrieve data from a watchtower, it basically becomes a backup server as well. A little-known secret is that the penalty transactions we send over to a watchtower are actually encrypted, and the encrypted data could basically be anything: it could be your backup. So if you ever end up needing arbitrary data storage, just pay a watchtower to store it for you and hope they give it back when you need it.

Actually, regarding watchtowers, I'm not quite sure if you just implicitly answered this, but are you planning on implementing watchtowers in c-lightning? Are there any plans?

Oh, absolutely. For a long time we've seen c-lightning more as a hosted solution, where you run your node at home or in a data center and connect to it remotely, so our nodes would be online 24/7. But there has been quite a bit of demand for watchtower functionality as well. So what I did in the last release, well, it's not yet released, is add a hook for plugins that pushes the penalty transaction for each of the prior states to a plugin. A hook is a mechanism for us to tell the plugin some piece of information in a synchronous manner, so that the plugin can take that information and move it somewhere else, your watchtower backend for example, before we continue processing the Lightning channel itself. That way we make sure the watchtower has the information it needs to store before we make progress on the channel. By pushing that into a plugin we let watchtower providers create their own connections and protocols to talk to their watchtowers, and we don't have to meld that into lightningd itself. So it's very much on our roadmap; I think there are three variants of the watchtower hook, and it should hopefully be in the next release. Great, thanks.
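That hook had not shipped at the time of this talk, so the following is a hypothetical sketch rather than the released interface: a plugin that receives each revoked commitment's penalty transaction and hands it to a tower before the channel moves on. The hook name, the payload fields and the tower endpoint are all assumptions.

```python
#!/usr/bin/env python3
"""Hypothetical watchtower-client plugin sketch (hook name and fields assumed)."""
import requests                      # assumption: tower exposes a simple HTTP endpoint
from pyln.client import Plugin

plugin = Plugin()
TOWER_URL = "https://tower.example.com/add"   # hypothetical watchtower endpoint

@plugin.hook("commitment_revocation")
def on_revocation(commitment_txid, penalty_tx, plugin=None, **kwargs):
    # Hand the penalty transaction to the tower *synchronously*: the channel
    # only makes progress once the tower has acknowledged storing it.
    resp = requests.post(TOWER_URL, json={
        "commitment_txid": commitment_txid,
        "penalty_tx": penalty_tx,
    })
    resp.raise_for_status()
    return {"result": "continue"}

plugin.run()
```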
And then, with end users, or private channels on the edge of the network, versus routing nodes in the middle: do you think the backup strategy or backup setup will be different for those two kinds of participants on the network?

Yeah, I do think that as the routing nodes inside the network start to professionalize, they will probably start professionalizing their data backend as well. So I see mostly businesses and routing nodes starting to use the replicated database backends more and more, and for end users I think the backup plugins provide a very nice trade-off: you get the functionality of a backup without the big upfront cost of setting up a replicated database, automated failover and all of that. Those are the two rails we've chosen, and they cover these two use cases: professional node operators, and the geeky son who sets up a node for the rest of the family.

Because if you are that end user at the edge of the network, you don't have to worry so much about going down at exactly the moment you're making a payment. That's the big challenge here, isn't it: you need to have that latest state, and if there's some bizarre timing, that's when it becomes really technically difficult.

Yeah. I recently did a restore of one of my nodes that does quite a bit of probing in the network, and I think it had about a week's backlog of changes, and it recovered in seven or eight minutes. That might be too much for a professional operator, but for me, sitting at home doing my grocery shopping or not buying anything on Amazon, seven minutes of downtime while I restore my node is perfectly fine. And even then we have trade-offs: we can have backup plugins that also replicate the database in the background, and then it takes just a couple of seconds to fail over.

Okay, any more questions on backup stuff, or should we go on to the many questions on all kinds of Lightning and c-lightning topics? Well, there seem to be a couple of database-specific questions, but that's something we can take in the chat afterwards. I'll definitely hang around and answer a couple of them.

Okay, so going on to some of the Twitter questions. You mentioned eltoo there; one of the questions was: is there any chance of getting it into the next soft fork with taproot, or has that ship sailed? And if that ship has sailed, what is viable for getting it into Bitcoin, if we ever get it in?

I mean, it would be awesome if we could get SIGHASH_NOINPUT, or SIGHASH_ANYPREVOUT, which is AJ's proposal, into the soft fork that is rolling out taproot. The problem is that all of the reviewers are currently focused mainly on taproot, and it's really hard to grab people and get them to discuss yet another proposal that we might want in the same soft fork. So while I think it is still possible, I'm pretty sure it will be a secondary, maybe lighter soft fork at a later point, maybe bundled with some more lightweight proposals which require fewer changes to the structure itself, so it's easier to review and to see which parts of the code base are being touched and what the side effects are, because taproot and Schnorr are a big change all at once. That being said, AJ has taken SIGHASH_NOINPUT and, with his ANYPREVOUT proposal, he has formulated all of the details that need to change in SIGHASH_NOINPUT for it to mesh nicely with Schnorr and taproot. So with his proposal we could end up with a very nice bundle of changes that can be deployed at a later point in time, independently of taproot and Schnorr. I remain optimistic that we can get it in, but I can't promise anything.
And then, do you have any thoughts on whether planning for the next soft fork can really happen seriously until we have, whatever it is likely to be, the general taproot soft fork first? Can it be a parallel process, or do you really need all of those Core reviewers and Core brains on the one soft fork before we even get serious about any other?

I think it's probably best if we keep the distractions as low as possible while we do an important update like taproot and Schnorr, so I personally don't feel comfortable creating more noise by trying to push hard on SIGHASH_NOINPUT. I think Schnorr and taproot will get us many, many nice features, and we shouldn't hold up the process by trying to push in more features while we're at it, because then everybody wants their favorite feature in, and I don't see this particular feature being so life-changing that it should jump the queue. I would like to have it in, because I think eltoo is a really nice proposal, self-congratulating again, so hopefully we will get it at some point, but I don't think we need to stop the entire machinery just for this one proposal. And who knows, we might come up with better solutions; I'm under no impression that this proposal is perfect, so the more time we can spend on improving and analyzing it, the better the outcome will be in the end, I guess.

Cool. And then a final one on Schnorr and taproot: have you been following the conversation on activation? Do you have any thoughts? Is there any way to speed up the process, or is this just going to be a long, drawn-out discussion?

Activation is a topic that many people like to talk about, and it's a very crowded space. I have a tendency to stay out of these kinds of discussions, where you already have too many cooks in the kitchen. I think whatever the activation mechanism is, it's going to be perfectly fine if it works. I don't really have a preference there, I would say.

Okay, very well. Let's go on to another question; sorry, the SIGHASH_NOINPUT question earlier came from a listener on Twitter. Another question, from Sergey, is: for the researchers and developers in the space, where should one put their efforts for the maximum positive impact on the Lightning Network? Obviously you do a lot of things: you're a researcher, you contribute papers, you contribute to c-lightning, you do a lot of work on the protocol and the BOLT specifications. How do you prioritize your time? Do you just work on whatever is the most fun, or do you prioritize the things you think are most important?

I definitely choose the topics that interest me the most. I did my PhD on this, and it was amazing because I could jump around the whole space; it was basically just a big green field where I could start building things, and it was all novel and new. I get the feeling that Lightning is the same way: all of a sudden you have this huge playground you can explore. Don't limit yourself to something that might be profitable or the cool thing; do whatever interests you. For me personally, I enjoy breaking stuff, so I will always try to find things that can be abused and see if I can get away with it, within limits. But Lightning is cool to just explore and see what you can do. If you want to increase security through watchtowers, that's a good thing; if you want to increase privacy by trying to figure out ways to break privacy, that's another very good thing. Whatever you like the most is great.

I'll continue with the Twitter questions. Jeff, is there anything on YouTube that people want to know? If you have any questions, drop them now; it's a good time.
On the question you just continued, I was just curious: I think it was about efficiency, sort of, where is the best impact you can make? I think that was the intention of the question, and it's always something worth considering, because we only have limited time and resources. So how do you think one particular person can have the biggest impact, on a more general note, I guess?

Yeah, I do think there are quite a few statements out there that need to be proven or disproven, and one thing in particular that I like doing is attacking the privacy of the network. We now have completely different trade-offs when it comes to privacy, going from on-chain to off-chain: we don't leave eternal traces of our actions like we do on a blockchain, but we do talk to peers, so what might those peers infer from our actions, and what information could they extract from that? So there's definitely the privacy aspect, which is worth looking at. Then there are network formation games: if you've ever done game theory, finding a good way to create resilient networks, but also networks that are efficient, is a huge question. How can we create a network where each node individually makes its own decisions and we don't end up with one big node in the middle becoming the linchpin, the single point of failure where, if that one goes down, everybody starts crying? That's definitely an open research question. And there's also more fundamental stuff: how can we improve the protocol itself to be more efficient, how can we gossip better, how can we create an update mechanism that is more efficient, that is quicker, that needs fewer round trips. We have one engineer, Rusty, who is in Australia, and he is always the guy who will try to get that half round trip shaved off your protocol, so that's his speciality and very much his focus. But I wouldn't say there is an official list of priorities for the Lightning Network and the associated research; you basically make your own priorities, though people tend to congregate on certain tracks. Right, thank you.

Cool, so another question from Twitter: I'd like to ask your thoughts on the privacy papers; I believe you're a co-author of at least one of them. Could you give a high-level summary of those conclusions, and your thoughts?

I started collecting data about the Lightning Network at the moment it started, so I have a bit of a backlog on the evolution of the network, and I've been approached by researchers over time who want to analyze this data: how the network grew, what the structure is, what the success probabilities are. That's how I usually end up on these research papers. And so far all of the analyses are, I think, very much on point: the analysis of centrality in the network, the upsides and downsides, the efficiency we gain through more centralized networks, the resilience we gain from distributing the network more. All of these are laid out pretty nicely in those papers.
Now, in some papers there is an attempt to extrapolate from that information, and that usually is not something I encourage, because these extrapolations are based on the bootstrapping phase of the Lightning Network, and it's not clear that these patterns and behaviors will continue going forward. It's mostly these extrapolations that people jump on when they say the Lightning Network is getting increasingly centralized and will continue to become more so, and that's something I don't like too much. The other thing is that people usually fail to see that decentralization in the Lightning Network is fundamentally different from decentralization in the Bitcoin network. In the Bitcoin network we have a broadcast medium where everybody exchanges transactions, and by looking at when and from whom I learn about a transaction, I can infer who the original sender is, and so on and so forth. In Lightning, the decentralization of the network is not as critical, because we have quite robust mechanisms for preserving privacy, even though we are now involving others in our transactions. We have onion routing; we have timing countermeasures; we have what are called shadow routes, which are basically appendices to the actual route, pretending we send further than the actual destination; we fuzz the amounts, so the amounts are never round, there is never an exactly five-dollar amount of bitcoin being transferred over the Lightning Network; and we've recently added multi-part payments, where payments of a certain size or larger are split, so unless you can correlate all of the parts you will not even learn the exact amount being transferred. All of these things are there to make the network more privacy-preserving. Is it perfect? Of course not. But in order to improve the situation, we first have to learn what's working and what's not, and that's my motivation behind all of these research papers: to see whether our mitigations have an effect, and if yes, how much more we can improve them, or whether we should drop a mitigation altogether because it carries a cost. So while I think we are doing well, I think we could do better, and that's why we do this research. And we do need to talk publicly about the trade-offs of these systems, because if we promise a perfect system to everybody, that's a promise we've already broken. Being up front with the upsides but also the downsides of a system is important, I think.
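One of the mitigations mentioned there, multi-part payments with fuzzed amounts, can be illustrated with a toy splitter: no single routing node sees the round total, only an irregular part of it. This is an illustration only, not c-lightning's actual presplitting logic.

```python
import random

def split_payment(total_msat: int, parts: int = 4, jitter: float = 0.25) -> list[int]:
    """Split a payment into irregularly sized parts that still sum to the total.
    A routing node that forwards one part learns neither the real total nor
    that the amount was a "round" number."""
    base = total_msat // parts
    amounts, remaining = [], total_msat
    for i in range(parts - 1):
        part = max(1, int(base * random.uniform(1 - jitter, 1 + jitter)))
        part = min(part, remaining - (parts - 1 - i))  # keep >= 1 msat for each later part
        amounts.append(part)
        remaining -= part
    amounts.append(remaining)
    return amounts

print(split_payment(5_000_000))  # e.g. [1201437, 1377060, 1143797, 1277706]
```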
And then, going back to the backup discussion where we were talking about hobbyists and enterprises: do you think it's important that there is a route for a hobbyist to set up a routing node, and that there are a lot of hobbyists running routing nodes, just so that people don't have to go through the enterprises? Or do we just need a lot more enterprises?

A hobbyist becomes an enterprise once they professionalize. There must always be the option for users to professionalize and become more proficient and more professional in their operations, because that's something we've gotten from Bitcoin. It's not that everybody must run a node; it's that we suddenly have a new currency where everybody has the option of taking on their own responsibility and becoming their own custodian, not having to rely on other parties. We shouldn't shame people into doing that, but the option must be there for those who are interested. And I do think there is a wide spectrum of options we can offer, from, yes, I'll say it, custodial wallets, up to fully self-sovereign nodes that run the full software stack at home, where you connect only to your own infrastructure. It is a spectrum, and depending on your interest you will land somewhere on it. We would like to have more people on the educated and knowledgeable side of things, where they run as much as possible themselves, but if you're somebody who just wants to accept a payment without having to read for months about how Bitcoin works and how Lightning works and understand everything, I think there should be an option for that as well. The important part is that it must be an option; we shouldn't be forcing people in one direction or the other.

I'll continue with the Twitter questions. Jeff, please feel free to interrupt me at any stage if you have questions from YouTube. Nothing so far, so if you have any questions, drop them in the YouTube comments please.

Okay, so there are a couple of questions on real-time database replication. Antoine asked whether real-time database replication is already in; he just merged his plugin a few weeks ago allowing backend customization. And fiatjaf also asks: can we get comments on real-time replicas made with Postgres?

So the replication of a Postgres database, once you are on a Postgres backend, is something that we in c-lightning do not include in our code, because it's basically just a backend that you write to. It's up to the operator to set up a replicated set of Postgres servers and then point c-lightning towards it. That is definitely in there; I think it's been in there since 0.7.2, so almost a year now. All you actually need to do is add --wallet=postgres:// followed by the username, password and URL of where your database lives, and c-lightning will start talking to Postgres instead of a SQLite3 database on your local machine. And again, Gabriele Domenichini has an excellent tutorial on that, which I've linked in the presentation.

Very cool. Okay, another question from fiatjaf, and this is an interesting one, on miniscript and custom HTLC types. fiatjaf is one of our best contributors; he has written a number of plugins for c-lightning. So the question was: should we think about custom channels or custom HTLC types? Is that possible or feasible? Would integration of miniscript allow for such customization? Do you see miniscript playing a role?

I don't exactly see where miniscript comes in. It might be useful when it comes to having a fully flexible implementation where we can have external applications deciding on what kind of outputs we add. One of the proposals, for example, is that once we have eltoo we can have multi-party channels with any number of participants, and the only thing these multi-party channels decide is whether to add, remove or adjust the amounts on individual outputs.
The way we describe those outputs could be in the form of miniscript, so that each participant in this ensemble of eltoo participants can decide whether or not to add an output to the state of the channel without needing to know the exact application sitting behind it, because they would get a miniscript descriptor. But besides that, if we go back to the Lightning Network, it absolutely makes sense to have custom protocols between nodes, and these can include custom channels, or custom assets being transferred over those channels, or even different constructions of HTLCs. The only things that matter are, first, that the two nodes that decide to use a certain custom protocol agree on what that protocol is and implement it correctly, and second, that if we want to maintain the cohesion of the Lightning Network and the ability to transfer funds from point A to point B, we need an HTLC construction that is compatible with the preimage contingency we currently use, or the point contingency that PTLCs would bring. That means if I receive an incoming HTLC from the left, I can use whatever protocol I want to forward it to the right, if my right peer knows that protocol, but we need to switch back to normal HTLCs once we've left that channel. So we can have mixes of different forwarding mechanisms, different update mechanisms and custom channel constructions, as long as from the outside it all looks compatible. That's also one thing I pointed out in the eltoo paper: eltoo is basically a drop-in replacement for the update mechanism in Lightning, and since we don't change the HTLCs, I could receive an HTLC over a Lightning penalty channel from Jeff and forward it to you over an eltoo channel. We can exchange individual parts as long as, on the multi-hop side, we agree on standards that are interoperable.

Because there is a direct line between me and you, right? If we just have a payment channel between me and you, we can do anything; it's completely down to us, so we could even just be pretending to transfer.

I mean, if we trust each other, say we have the same operator and can settle outside of the network, Jeff could send me an HTLC and I could just tell you, using an HTTP request: hey, it's okay, just forward this payment to wherever it needs to go, we'll settle with a beer later. So even those constructions are possible, where we don't even have to have a channel open between us to forward a payment, if we trust each other. Or we could change the transport mechanism and transfer the Lightning packets over SMTP, or over ham radio like some people have done. We can take parts of this stack of protocols and replace them with other parts that have the same functionality but different trade-offs.
But if you and I have a channel that's one hop in the middle of a route from A to B, there are some restrictions we have to follow: we have to follow the BOLTs, we have to be BOLT compatible and so on, if we're going to be routing onion-routed payments?

Yes, there is one restriction, which is that we will only accept channel announcements if they correspond to an outpoint on the blockchain. This is done as an anti-spam measure, so we would have to create something that looks like a Lightning channel if we were to pretend that there is a channel there. But other than that, you can basically do whatever you like.

And just to wrap up fiatjaf's question: so you don't think there is a role for miniscript in terms of custom scripts, or making changes to the standardized Lightning scripts that most of us are using?

There might absolutely be a point to it, but miniscript is a mechanism where we can express some structure, some output, that we haven't agreed upon ahead of time. I could tell you: hey, please add this output, and I wouldn't have to tell you before we open the channel what it needs to look like; I can just send you a miniscript. So miniscript is a tool that allows us to talk about scripts, but currently in the Lightning protocol all of the scripts are defined in the specification itself, so for us there is no need to talk about potentially different structures of outputs. There might be a use once we add the flexibility to decide on the fly what a certain output should look like; there has just never been this need to meta-talk about outputs, because we already know what they look like.

I think that leads nicely on to the next question, from Nadav Kohen, who has been doing a lot of work on PTLCs. So instead of miniscript and scripts, perhaps we'd be using scriptless scripts and PTLCs. His question is: what are the biggest pain points to implementing protocol changes in a Lightning node, e.g. if we wanted stuckless payments or PTLCs?

It varies a lot. Depending on where the change is being made, it might require changes to just a single line of code, which is always a nice thing because you get a changelog entry for ten seconds of work, while other changes require really reworking the entire state machine of a Lightning channel, and those often require months of work, if not longer. For PTLCs it would not be too hard: it basically just changes the way we grab the preimage on the downstream HTLC and hand it over to the upstream HTLC. There would now be an additional computational step in there: instead of taking the preimage here and applying it there, we would take a preimage, or a signature, modify it slightly, and that gives you whatever needs to be plugged in on the other side. When it comes to deeper changes to the state machine, we currently have the anchor outputs proposal, for example, which would require us to rework quite a lot of our on-chain transaction handling, so we try to be very slow and deliberate about which changes of that caliber we add to the specification, because they usually bring a lot of work with them. And of course the extreme case is if we were to add eltoo: that would basically be reworking the entire state machine of the entire protocol, and that's major work. So yes, eltoo has its downsides too.
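That extra computational step per hop is the essence of the PTLC change: the hash/preimage contingency becomes a point/scalar one, and each forwarding node applies a small arithmetic tweak. A toy sketch with modular arithmetic standing in for real elliptic-curve operations; this is not how adaptor signatures are actually implemented.

```python
import hashlib

# HTLC today: every hop is locked to the same hash, and one preimage unlocks them all,
# which lets observers correlate the hops of a single payment.
preimage = b"payment secret"
payment_hash = hashlib.sha256(preimage).hexdigest()
def htlc_claimable(revealed: bytes) -> bool:
    return hashlib.sha256(revealed).hexdigest() == payment_hash

# PTLC sketch: each hop is locked to a different "point". Toy group: integers mod a prime.
P = 2**31 - 1                 # stand-in for the curve order
receiver_secret = 123456789   # scalar the receiver reveals when claiming
hop_tweak = 987654            # per-hop blinding the sender gave this forwarding node
outgoing_lock = receiver_secret % P               # what our downstream peer must reveal
incoming_lock = (outgoing_lock + hop_tweak) % P   # what we must reveal to claim upstream
# When the downstream PTLC is claimed we learn receiver_secret, add our tweak,
# and can claim the incoming PTLC, while the two locks look unrelated to outsiders.
assert (receiver_secret + hop_tweak) % P == incoming_lock
assert htlc_claimable(preimage)
```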
So Nadav, I know he has been working on PTLCs, and I think he implemented them with Jonas Nick and a few others, I think with ECDSA adaptor signatures. Do you think there should be a lot more work on things like channel factories before eltoo, because eltoo is probably going to be a while?

I am a bit hesitant when it comes to channel factories, because depending on what day it is they are either brilliant or they are just stupid, and being one of the authors of that paper, I don't know. The main problem with channel factories is that we require multi-party channels first, because what channel factories do is basically take some shared funds that are managed collaboratively between a number of participants, and then take part of that and move it into a separate sub-channel inside that construction. That has the advantage that it is no longer the entire group that needs to sign off on changes, but just the two of us that need to agree on what happens to the funds in the sub-channel, and it is way quicker to collect two signatures rather than fifteen. That is basically the main upside. The downside, of course, is that first we need to have this group of fifteen, and the way we implement this group of fifteen, the multi-party channel, needs to either be a duplex micropayment channel, which is a very old paper of mine that never really took off because its blockchain footprint is kind of large, or we use eltoo, which allows us to set up these very lightweight channels with fifteen or fifty or sixty participants, and then we can build channel factories on top of that for efficiency. The reason I'm saying that on some days channel factories sound kind of weird is that we then already have an off-chain construction where we can immediately sign off on changes, without needing yet another level of indirection; but then again there is the efficiency gain. So yeah, still undecided, I don't know.

Okay, and let's just wrap up the last question from Twitter: what can we do to encourage better modularity, and is this important considering an approaching Taproot world?

I don't know exactly which kind of modularity this is referring to; I guess modularity in the implementations, but maybe I'm misunderstanding. I think the modularity of the protocol and the modularity of the implementations pretty much go hand in hand, simply because if the specification has very nice modular boundaries, where you have separation of concerns, one part manages updates of state, one part manages how we communicate with the blockchain, and one part manages how we do multi-hop security, that automatically leads to a structure which is very modular. The issue that we currently have is that the Lightning penalty mechanism, namely the fact that whatever output we create in our state must be penalizable, makes it so that this update mechanism leaks into the entire rest of the protocol stack. I showed before how we punish the commitment transaction if I were ever to publish an old commitment, but if we had an HTLC attached to that, this HTLC too would have to have the facility for me to punish you if you published this old state with this HTLC that had been resolved correctly or incorrectly or timed out. So it is really hard, in the penalty mechanism, to have a clear-cut separation between the update mechanism, the multi-hop mechanism, and whatever else we build on top of the update mechanism, because it all just leaks into each other. That is something I really like about eltoo: we have this clear separation of "this is the update mechanism" and "this is the multi-hop mechanism", and there is no interference between the two. So I think by cleaning up the protocol stack we will also end up with cleaner separations and more modular implementations.

And of course in c-lightning we also try to expose as much as possible of the internals to plugins, so that plugins are first-class citizens in the Lightning node itself and have the same power that most of our pre-shipped tools have. One little-known fact, for example, is that the pay command, which is used to pay a BOLT 11 invoice, is also implemented as a plugin. The plugin takes care of decoding the invoice, initiating the payment, retrying if a payment fails, splitting a payment if it is too large, adding a shadow route, adding fuzzing; all of this is implemented in a plugin. The bare-bones implementation of c-lightning is very light and doesn't come with a lot of bells and whistles, but we make it so that you have the power of customizing it. So we do try to keep a modular aspect to c-lightning, despite the protocol itself not being a perfectly modular system.
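Since the pay command itself ships as a plugin, a minimal sketch of what such a plugin can look like with the pyln-client library may help; the "hello" method and its behaviour below are made up for illustration, not an existing command.

```python
#!/usr/bin/env python3
# Minimal c-lightning plugin sketch using the pyln-client library. The RPC
# method below is invented for illustration; built-in plugins such as `pay`
# register their commands with the node in the same way.
from pyln.client import Plugin

plugin = Plugin()

@plugin.init()
def init(options, configuration, plugin, **kwargs):
    # Called once the node has finished starting the plugin.
    plugin.log("hello plugin initialized")

@plugin.method("hello")
def hello(plugin, name="world"):
    """Exposed to users as `lightning-cli hello [name]`."""
    return f"Hello, {name}!"

plugin.run()
```

Started with --plugin=/path/to/hello.py (or dropped into the plugins directory), the new method then appears alongside the node's built-in RPC commands.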
Cool, thank you. And then a second question from fiatjaf: there was a mailing list post from Joost, a proposal for upfront payments, and there was discussion about it, but then it seemed to have stopped. Why haven't we seen more development in that direction?

You dropped out there temporarily, could you repeat the question?

Sure, sorry. fiatjaf had a question on upfront payments: there was discussion about it, but then it seems to have stopped. Why haven't we seen more development in that direction?

Yes, so upfront payments is a proposal that came up when we first started probing the network, because probing basically involves sending a payment that can never terminate correctly, but by looking at the error code we receive back we learn a bit about the network, and it is free, because those payments never actually terminate. That brought up the question of "hey, aren't we using somebody else's resources by creating HTLCs with their funds, but not paying them for this service?" And so the idea came up of having upfront payments, which basically means that if I try to route a payment, I will definitely leave a fee, even if that payment fails. That is kind of neat, but the balance between working and not working is hard to get right. The main issue is that if we pay a node upfront just for receiving a payment, and not for forwarding it, then it might be happy to just take the upfront fee and fail, without having any stake in the system. If I receive an incoming HTLC from Jeff and I need to forward it to you, Michael, and Jeff is paying me ten millisatoshis for the privilege of talking to me, I might not actually take my half a bitcoin and lock it up in an HTLC to you; I might just be happy taking those ten millisatoshis and say "yeah, I'm okay with this, you try another route". So it is an issue of incentivizing good behavior versus incentivizing just abusing the system to maximize your outcome. A mix of upfront payments and fees contingent on the success of the actual payment is probably the right way, but we need to discuss it a bit more, and people's time is rather tight when it comes to these proposals; there has just been too much movement, I guess.
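A toy back-of-the-envelope comparison (all numbers invented, not from the talk) shows the incentive problem with a purely upfront fee and why a success-contingent component is probably needed:

```python
# If the fee is purely upfront, a node earns the same whether it forwards or
# fails, so failing (no capital locked, no risk) dominates. Making most of the
# fee contingent on success restores the incentive to actually forward.
def expected_income(upfront_msat, success_msat, forwards, success_prob=0.8):
    """Expected fee income for one incoming HTLC under a given strategy."""
    if not forwards:
        return upfront_msat          # pocket the upfront fee, fail the HTLC
    return upfront_msat + success_prob * success_msat

# Purely upfront design: forwarding adds risk and locked funds but no income.
print(expected_income(10, 0, forwards=False))   # 10 msat, nothing at stake
print(expected_income(10, 0, forwards=True))    # 10 msat, capital locked up

# Mixed design: most of the fee only arrives if the payment succeeds.
print(expected_income(2, 1000, forwards=False)) # 2 msat
print(expected_income(2, 1000, forwards=True))  # 802 msat expected
```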
Okay, I think that is all my Twitter questions. I'm going to ask one of my own, if that's okay, Jeff?

We need to wrap up. Time for one, and then we'll move it into the private Jitsi room.

Okay, very good. So I've been reading a bit about Simplicity, which your colleagues Adam Back and Russell O'Connor have been working on at Blockstream. They talked about being able to do new sighash flags, like SIGHASH_NOINPUT or SIGHASH_ANYPREVOUT, without a soft fork. So in a world where we had Simplicity, perhaps the arguments against NOINPUT or ANYPREVOUT, namely them being dangerous for users, no longer apply, because if Simplicity were in Bitcoin people could just use them anyway. Any thoughts on that?

Ah, that's an interesting question. How can I not get fired for discussing this in public? I do think Simplicity is an awesome proposal. It is something that I would love to have, because so far, during my PhD and during my work at Blockstream, the number one issue we had was that stuff we wanted to do was blocked by features not being available in Bitcoin itself, and it can be frustrating at times to come up with a really neat solution and not be able to actually enact it. As far as the criticism of SIGHASH_NOINPUT and SIGHASH_ANYPREVOUT goes, we shouldn't take it lightly, and people do bring up good points about there being some uncertainty and certain insecurities when it comes to double-spending, when it comes to securing funds, and about how we clamp down this proposal as much as possible so that people can't inadvertently abuse it. But I do think that, given everything that already exists in Bitcoin, we are currently trying to save a sandcastle while the dike is breaking behind us. It is a disproportionate amount of caution when we already have some really dangerous tools in the Bitcoin protocol itself, first and foremost whoever invented SIGHASH_NONE: you can have a signature that does not cover anything, and yet you can still spend funds with it. So while I do take the criticism seriously, I don't think we need to spend too much time on it, and indeed if we get Simplicity a lot more flexibility could be added. Of course, with great power comes great responsibility, so we need to make sure that people who want to use those features know the trade-offs really well and don't put user funds at risk. That has always been something we have pointed towards: we need tech-savvy people doing these custom protocols, otherwise you should stick with what is tested and proven.

And obviously Simplicity is still far off and isn't in Bitcoin yet, I'm assuming?

So, I have just completed a Simplicity-based transaction on Elements. We do have a network where we test these experimental features, to showcase that they are possible and what possible implementations could look like. That is our testing ground, basically,
and Russell has used his Simplicity implementation for some of these transactions.

All right, cool. There is quite some cool stuff in there. Okay, so do we need to wrap up now, Jeff?

Yeah, I think we can come to an end. There have been a lot of questions and there are a couple more, but I think we can take them to Jitsi.

One last one, a lighter one: can you ask Lisa to change her Twitter avatar? Because it is so confusing, I keep thinking Jack Dorsey is tweeting about c-lightning.

Do I look like Jack Dorsey? No, no, Lisa. Oh, yeah, Lisa, maybe you can ask her to change hers. She changes it quite often, so I just need to be patient.

All right, thank you. If you couldn't get your question in, there is one great opportunity coming up: we are going to have another Lightning Hacksprint next weekend, and Christian already said he is going to be there, and all of Blockstream is coming, I think. No, I'm just kidding, and I think our marketing people won't be there. No, but you said you want to bring some more people, which is great, and you are definitely going to be there. I'm posting the link right now. So we are not going to have a Potzblitz episode next week, but we are going to have a Lightning round table wrapping up all the results of the Lightning Hacksprint. And now I'm just going to post the Jitsi room link to YouTube so you can join and ask some more questions. Thank you Christian for coming, and thank you Michael for coming. Thank you both for the work you are doing, propelling Bitcoin further to the Moon and Mars and beyond. It's a lot of fun. All right, here's the link, and this was episode number four of Potzblitz. See you in two weeks. Lightning Hacksprint next week. Bye bye! Thank you Jeff! Thank you Christian!