This is a conversation with Tim Beiko and Danny Ryan - the lead coordinators for the Eth1 and Eth2 development efforts - about the future of the Ethereum protocol.
In this conversation, we go deep on the future of the Ethereum protocol together. We talk about the Merge (the transition from Proof of Work to Proof of Stake via the Beacon Chain - which is the most substantial Ethereum network upgrade to date, and happening sooner than many people realize), the cryptoeconomics of PoS, MEV, staking derivatives, and how protocol development works in practice.
Into the Bytecode:
- Sina Habibian on X: https://twitter.com/sinahab
- Sina Habibian on Farcaster: https://warpcast.com/sinahab
- Into the Bytecode: https://intothebytecode.com
Disclaimer: this podcast is for informational purposes only. It is not financial advice or a recommendation to buy or sell securities. The host and guests may hold positions in the projects discussed.
Sina [00:00:18]: Hey, everyone. Welcome to Into the Bytecode. My guests today are Tim Beiko and Danny Ryan. They're two of the smartest people in this space and are the lead coordinators for the Eth1 and Eth2 development efforts. In this conversation, we go deep into the future of the Ethereum protocol. We talk about the merge, which is the most substantial Ethereum network upgrade to date. It's the transition from proof of work to proof of stake via the beacon chain, and it's happening sooner than most people realize. We go into a lot of detail on how this is all going to work. We talk about the crypto economics of proof of stake, MEV, staking derivatives, and how development works in a global project like the Ethereum protocol. We cover a lot of ground in this conversation. I hope you enjoy it as much as I did. I thought today we would spend the time talking about the merge, the Eth1/Eth2 protocol merge. The Ethereum protocol has made a lot of progress in recent months, first with the beacon chain launching and more recently with EIP 1559 going live. And my sense is that the next big focus is going towards the merge. And so I thought an easy place to start, before we dive into deeper waters, would be to briefly touch on: what is the merge, and why are we actually doing this?
Danny [00:01:50]: So the Ethereum system exists. The Ethereum system launched many years ago. You actually mentioned probably the two most substantial upgrades, the biggest engineering efforts that have happened since genesis: the creation of the beacon chain and the reworking of the fee market with 1559, which is great. We really got some momentum going; a lot of exciting work being done, and now we're prepping for the merge. There are these terms, Eth1 and Eth2, which represent really just different parts of the system that we're attempting to architect and modify and upgrade. Eth1 is the chain that we know and love. It has a proof of work consensus mechanism, and then it has the contents that we really care about as users, which are apps, contracts, accounts, transactions, all that kind of stuff. And then we have this other thing we've been working on, Eth2, and Eth2 at its core is really the beacon chain and all of the features and things that we can do with the beacon chain. The beacon chain is a proof of stake consensus mechanism that we launched last December. It is really exciting. It comes to consensus on itself, but it's primed to come to consensus on other things. And so, as I mentioned, Eth1 really has two components: the proof of work consensus mechanism, and then all of the valuable user layer, execution layer items. And really what the merge is, is the removal of that proof of work consensus mechanism, so all we're left with is all the valuable things to users, and the hot swapping of that in the live setting for the new consensus: the beacon chain. So essentially the beacon chain is coming to consensus on itself. The proof of work chain is coming to consensus on all the user layer stuff, chugging along. And at the point of the merge, the beacon chain will then come to consensus on all of the execution layer, user layer stuff. And we will leave energy-hungry proof of work behind once and for all.
Sina [00:03:53]: Yes. And this has been such a long time coming. I feel like one of Danny's and my first conversations, before either of us was working at the Foundation, was around proof of stake, at these Ethereum meetups, with this being some distant thing in the future. And that's really what we're talking about: moving from proof of work to proof of stake. And maybe also to touch on why this actually matters, what changes it results in, before we go into the more technical details. Danny, I've heard you say that it changes security, sustainability, and scalability.
Danny [00:04:34]: Those are my top talking points. Yes.
Sina [00:04:36]: Yeah. I'm stealing your…
Danny [00:04:37]: Yes, my...
Sina [00:04:38]: But what are the highest order changes on each of those dimensions?
Danny [00:04:46]: So yes. Eth2 is really an attempt to make Ethereum's consensus more sustainable, secure, and scalable while retaining decentralization. It's really easy to do those things if you make what we would call very critical sacrifices. But we want to do it while retaining decentralization, retaining some of the core properties that we care about in the system. So the move to proof of stake, the merge, gets us the first two: the sustainability and the security. It is more sustainable because the cryptoeconomic consensus mechanism relies on, essentially, bonds in the form of Ether, capital put up as the asset at stake, as opposed to the accumulation of physical hardware and burning of energy, which is essentially the analog in proof of work. To participate in that cryptoeconomic consensus mechanism in proof of work, you just have to burn tons and tons of energy to prove that you're dedicating a scarce economic resource to the protocol. Whereas in proof of stake, I sign a message, I put capital at stake, and I participate in the cryptoeconomic consensus mechanism. So I don't need to burn tons of energy, which is great. So, more sustainable, yes that's it.
Sina [00:06:01]: So instead of actually solving the quote-unquote proof of work puzzles and getting X number of zeros at the beginning of the hash, now you're literally signing a [06:16 inaudible], and the energy usage is negligible, right? I mean, it will fall by 99.9%.
Danny [00:06:21]: Yes. And the core is that, as I said before, these are cryptoeconomic consensus mechanisms. And so really, they're proof of dedication of a scarce resource to the protocol. And the scarce resource is energy and mining power in proof of work. To prove that over and over again, you have to literally burn energy. Whereas to take this native crypto asset and provide it to the protocol and have it at risk is much easier. I pretty much enter into a contract. I say, I'm going to play this game. I sign a message, and then if I play the game well, I make money, and if I play the game poorly, I lose money.
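To make the contrast concrete, here is a toy sketch in Python. It is illustrative only: Ethereum's proof of work actually used Ethash rather than plain SHA-256, and real attestations are BLS signatures over structured data, but it shows the shape of the work involved; one side grinds hashes, the other signs a single message.

```python
import hashlib

# Proof of work (toy version): grind nonces until the block hash clears a
# difficulty target. Proving dedication of the scarce resource (energy)
# means literally burning it, over and over, for every block.
def mine(block_data: bytes, difficulty_zeros: int) -> int:
    target = "0" * difficulty_zeros
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith(target):
            return nonce  # on average 16**difficulty_zeros attempts
        nonce += 1

# Proof of stake (toy version): the scarce resource is bonded capital, so
# participating costs one cheap signature per duty. This hash-based sign
# is only a stand-in for a real BLS signature.
def attest(attestation_data: bytes, secret_key: bytes) -> bytes:
    return hashlib.sha256(secret_key + attestation_data).digest()
```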
Sina [00:07:00]: Is there some sort of an interesting chicken and egg problem here, where the cryptoeconomic value of the collateral depends on the system being secure, and the system being secure depends on the collateral having value?
Danny [00:07:14]: Tim, what are your thoughts on that one?
Tim [00:07:16]: Yes, I think that's a great point. And I think Ether having value is definitely a prerequisite to moving to proof of stake. The absolute corner case is: if Eth were worth zero, you couldn't do proof of stake. And clearly, at whatever Ether's worth today, we can do it, and it's kind of a gradient from there, basically. As the value increases, so do the security assurances of the system and the costs to attack the system. So it might not be a true relationship forever that more value is better, but there is definitely some minimum amount of value that you need. And if you compare this to what you see in proof of work, this threshold of value is usually being the dominant coin for your hardware class. So if, for example, you're a GPU coin, such as Ethereum, you want to be the biggest GPU coin, because then any other GPU coin that's smaller than you can get attacked by just a fraction of the hash power. And similarly, if you're mined on ASICs, you want to be the biggest coin for that particular ASIC. When you translate that to proof of stake, you basically want the economic value that's securing the system to be large enough relative to how much value there is in the system. And Ethereum is interesting in that it doesn't only secure Ether; it also secures basically every application that's built on top of it.
Sina [00:08:55]: And so, how does that shake out as there are more and more applications built on Ethereum, with stablecoins issued, fiat-backed stablecoins, and really the sky's the limit on the amount of value that can sit on top of Ethereum...
Danny [00:09:12]: Fortunately, there is a value accrual mechanism that translates usage of Ethereum into really fundamental value for Eth, because Eth literally is burned when other things use the platform. So there's actually a lot of... I don't want to get into the ultrasound money thing, I don't want to get into a pump-the-bags thing, but the native fee market mechanism having Eth very much entrained in it actually helps at least add to the security. Essentially, you don't have a bunch of freeloading apps leveraging the security of Ethereum but not really adding back to it.
Tim [00:09:48]: Yes. And one thing I'll add to that, talking about EIP 1559 gets us into fee burn. One way I like to put this: imagine you explain Ethereum to somebody who knows nothing about blockchains, and you're like, there's this blockchain, which has applications that run on it, and the applications can have coins and hold economic value. Anybody would assume that there is some mechanism by which the platform captures some of that value. Maybe they won't have an intuition for how it works, but you explain that to somebody and their initial reaction or thought process will be: well, the more applications there are on Ethereum, the more valuable the system will be. They might not be able to say how, but they'll have this intuition. And before 1559, that actually wasn't true. More applications on Ethereum didn't necessarily make the network more secure. When people paid high transaction fees, these basically went to miners, which are block producers with expenses denominated in fiat, so they need to sell those coins in order to pay back those expenses.

And I think a large part of the reason why the network has so much value is that, over time, you've seen Ether kind of become this unit of account or store of value for a lot of applications. You saw this initially with ICOs in 2017; they held their treasuries mostly in Eth for a long time. And then you've seen it with DeFi, which has started to use Eth as collateral. So we had some mechanisms by which there was demand for Ether, and that generates value, but we didn't have a mechanism that translated more usage of the network into basically less supply. And obviously, again, there are a lot of caveats around reasoning about the price and whatnot, and there's a lot of volatility associated with cryptocurrency prices. But in general, if you think of a fixed point in time, you have current supply times current price equals the market cap, and if you remove from that supply, you're obviously kind of going to increase the value of the remaining supply.
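Tim's supply arithmetic, with entirely hypothetical numbers just to show the mechanics (and holding market cap constant, which is the simplifying caveat he flags):

```python
# Entirely hypothetical numbers, only to illustrate the supply mechanics.
supply = 120_000_000          # Eth in circulation
price = 2_000                 # USD per Eth
market_cap = supply * price   # 240,000,000,000 USD

burned = 1_000_000            # Eth destroyed via EIP-1559 base fees over time
new_supply = supply - burned

# If demand (market cap) were held constant, the same value now sits on
# fewer coins, so the implied value per remaining coin rises:
implied_price = market_cap / new_supply
print(round(implied_price, 2))  # 2016.81, up ~0.84%
```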
Danny [00:11:57]: And I'm almost never going to be talking about price, especially in the short term. I'm talking about value, intrinsic value. Companies often will clearly be building intrinsic value, but their share price can map in the wrong direction for 10 years. I'm not making claims about price when I'm talking about this stuff. And with the move to proof of stake, even without 1559, instead of miners, the block producer has access to fees and has access to anything else that can be valuably produced by producing a block. And at the merge, that right goes to people that hold and lock up Ether. And so, that also kind of helps create the reason to have…
Sina [00:12:47]: Another source of demand for holding Eth and participating as a validator.
Danny [00:12:51]: I did want to jump back a second and say: maybe proof of stake, bootstrapping that consensus mechanism, is circular in a sense. And maybe it's a bit more circular than proof of work, but I'd argue that it's very much circular in proof of work as well, because why were the early people mining Bitcoin? Because they wanted Bitcoin, and the by-product of them mining is making the network more secure. There would be literally no reason to mine the first block or the 10th block, unless you wanted the Bitcoin for some reason…
Sina [00:13:24]: There are these people sitting in a corner mining some made-up imaginary currency on their own, and no…
Danny [00:13:30]: There's definitely this bootstrapping thing where somebody has to want to do this because they think it's valuable, but it's not valuable because no one's done it yet. But the more people that do it, the more secure it is, and then maybe it's more valuable. It's some sort of infinite-loop, kind of Gödel, Escher, Bach thing, where eventually it is the system in and of itself because it keeps going…
Sina [00:13:54]: It makes me think that many things in the world work this way. It makes me think of Sapiens, almost, and the idea of these imagined orders that basically only exist in people's minds and don't physically exist in the real world. The idea of money or a legal system is not a physical reality of the universe. It only exists because enough people believe in it. And if enough people stop believing in it, then it stops being real. And so, there's this problem of how you bootstrap these sorts of shared hallucinations to enough coverage that they become self-sustaining.
Tim [00:14:32]: And one thing I think is interesting about Ethereum there is that we did start with proof of work, which I think was really valuable, because you do see a very different community around mining than around, say, users or holders or whatnot. So to take your analogy, Sina, it's almost like it bootstrapped this idea in a larger community's minds, and we were able to get this kind of stakeholder group that's part of the community and bring them in early on. So yes, I really like that Ethereum has had this proof of work period in its life, and I think it's brought in the community.
Danny [00:15:16]: Yes. Longer than expected.
Tim [00:15:19]: Yes, that's fair. Yes.
Danny [00:15:20]: When Sina and I first met, I probably thought proof of stake was coming in a year and that was in 2017 or something. I was very naive.
Sina [00:15:30]: Well, I guess this idea of proof of stake just having a larger footprint, in terms of the number and diversity of stakeholders that are participating in it, is part of the reason why it's more secure and more decentralized, right? Anyone can participate from their own laptop anywhere in the world versus proof of work, where you need to set up a mining farm. And that itself leads to the network being more robust.
Danny [00:15:58]: Yes, there are certainly many trade-offs in the different design decisions on how these systems can be constructed. But I think one of the core reasons I would say it is more decentralized, and really the counter here, is that proof of work people love to say it's just the rich getting richer. That's a very naive interpretation, because these are cryptoeconomic consensus mechanisms. The whole point is you put up capital in some form and you get a reward. And if you can make that function very pure, and not have disjoint sections where if you put up more capital you get more reward as a fraction, then that's better. We're operating on the assumption that these are cryptoeconomic consensus mechanisms: you put up capital, you get a reward. In proof of work, you can put up a lot of capital, you can get very entrenched in supply chains, you can get hardware sooner, you can build custom hardware, and you can actually, per unit of capital, get more reward. Whereas in proof of stake, it's a much more pure function. It's this very liquid asset; you can go to any local exchange, you can probably talk to your neighbor and [00:17:05 inaudible] these days, and then you choose to stake or not, and you can utilize very simple hardware. So pretty much, after the very small fixed costs of some bandwidth and maybe an old laptop, you are able to put up capital and get your reward, just the same as the big guys.
Tim [00:17:26]: I was going to say, you just mentioned you thought it would be done in 2018. Maybe it's worth taking a minute or two and explaining what's changed since 2018, because I think the part around reducing the minimum stake is something that's probably underrated, and that other proof of stake systems that are live today don't necessarily have. So yes, why did Ethereum have to take these three extra years to ship proof of stake? What did we do in the meantime?
Danny [00:17:57]: Right. So in 2018, we released an EIP called EIP 1011, hybrid proof of work/proof of stake Casper FFG. And that was going to utilize proof of work as the block proposal mechanism, then add a proof of stake finality mechanism on top, and then eventually get rid of the proof of work block proposal mechanism and have the stakers also propose blocks. The minimum stake requirement was going to be 1500 Eth, and this was due to a number of reasons. One, I think there was a strange engineering decision to utilize the EVM for the core smart contract, so there was some amount of efficiency loss there, some constant, maybe two, three X. Then around the same time that that was being specified, Justin Drake and Vitalik realized that this cutting-edge cryptography, BLS signatures, which allow for signature aggregation, could make the system way more decentralized. And so that design, after much work, after clients were actually beginning to put testnets together, was scrapped for an orders-of-magnitude better design, but one that was clearly going to take a while.

And that reduced the minimum stake requirement from 1500 to 32 Eth, and still had the same economic parameters that targeted a similar amount of Eth, so 10 million, 20 million total stake. So now, instead of there being hundreds or low thousands of validators on the system, there are hundreds of thousands of validator entities. Each entity is exactly 32 Eth, so if I wanted to participate with 64, I'm two entities in the consensus, but nonetheless, I can participate with as little as 32. And I know many people that do, on that kind of short range of 32 to 64, et cetera. Granted, that number feels less and less small in absolute USD terms, but even then it's been a mega boon for decentralization. And on the five-year time horizon, when global bandwidth increases, when the base specs of computers continue to increase and that kind of stuff, you could imagine cutting that in half while it still meets similar requirements to today's. And ideally that continues to be cut at least a few times, but we will see.
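To pin down what "validator entities" means here, a small spec-style snippet; the 32 Eth constant matches the beacon chain design, and the rest is a simplification:

```python
MIN_DEPOSIT_ETH = 32  # per-validator stake on the beacon chain

def validator_entities(stake_eth: int) -> int:
    # More stake doesn't make one validator heavier; it makes more
    # validators. 64 Eth means running two 32-Eth validator keys,
    # each counted separately in consensus.
    return stake_eth // MIN_DEPOSIT_ETH

assert validator_entities(32) == 1
assert validator_entities(64) == 2
assert validator_entities(10_000_000) == 312_500  # hundreds of thousands, network-wide
```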
Sina [00:20:27]: That makes me think of one thing, which is: through the timeline of Eth2 and Serenity and the various names that it's had, I've personally been impressed by how the research community has decided to postpone where it makes sense, because multiple orders of magnitude of improvement were possible. And to the outside, this has seemed at points like, okay, there's just no progress, they're not shipping. But to me, it always seemed like these were actually sensible decisions being made, if this system is going to live, you know, who knows how many years? Hundreds of years, thousands of years into the future. And more recently, I'm sure you're both involved in the engineering implementation process as well, but me reading some of the discussions and the EIPs, it feels like this trade-off between being idealistic in the design and being pragmatic has almost been honed into a craft of its own in this process. Like with the merge EIP, it really does the absolute minimum necessary, and there is going to be another EIP right after it to clean up some of the mess that's left behind; even those two pieces have been decoupled from each other.
Danny [00:21:54]: Yes. Another thing we're often seeking in design, and especially in upgrading existing systems, is simplicity. Getting consensus is very hard, even with just a single client implementation, and we have many client implementations. And so, often you could imagine a feature that's really cool or makes things a little bit better, and you're like, no, it is not worth writing that down and trying to get everything to be in sync. So even though it maybe doesn't seem like it, because the Ethereum system is complicated, we are seeking minimalism as well. But yes, Tim, I was going to ask you what you thought about the craft of it.
Tim [00:22:39]: Yes. I think we reach for simplicity, given our constraints. "Given our constraints" is kind of the interesting part. For example, the typical All Core Devs EIP process is: somebody comes up with a new idea for a great feature, they present it to All Core Devs, and then it gets shot down because it's a denial of service vector; that's the canonical thing. And then they go and basically patch up their feature with a bunch of edge cases to make sure that it's safe, and at that intermediate spot you usually get something that's quite janky. And ideally that gets iterated on and fixed, and we then come to a new implementation that's usually simpler and addresses the security concerns.
But the fact that there are these considerations around security that are much higher than in your typical software project, and the fact that it all happens in public, I think, makes it seem much messier than normal tech projects or tech companies. One example I think of: with an iPhone or something like that, Apple doesn't show you the five years of arguing and the prototypes lying open with the circuit board showing. For better or for worse, with Ethereum, all of that is public. So I think it gets very easy to see the messy part. And then, when it's actually done, I don't know, a lot of times maybe people don't pay enough attention to the end result and say, oh, now it actually works. It's like the goalposts have been moved. Yes.
Sina [00:24:27]: So, as a semi-outsider to this process, as someone who follows it from a distance, to me it seems like a complete miracle that a decentralized group of people, who work in different organizations and are working remotely, getting together and talking asynchronously and on these calls, are making progress on one of the most complex systems in existence and trying to upgrade it in real time.
Danny [00:25:07]: Yes, I'll just agree. It is a miracle.
Tim [00:25:10]: Yes.
Sina [00:25:11]: I've seen engineering teams where there's a leader and they're all working in the same room and trying to do something ambitious, and that's already impossible; it's very hard. So what's happening here?
Tim [00:25:24]: I agree with that. It is a miracle, I guess, is my first thought. Again, the fact that it's open doesn't make it seem worse than it is; it makes it seem as it is. And I don't know, I'd like to compare: Ethereum is a pretty big system now; it manages billions in value, if not trillions if you count everything on it. Just imagine you're an engineer working on something like Gmail or Zoom, right? Those systems are also incredibly complex, and the user interface that's presented is much simpler than what's going on behind the scenes. With Ethereum, the behind-the-scenes is just visible for everybody to see throughout the whole process. And hopefully we keep the complexity small, but at some point, when the system has to process a ton of things and work at scale, there's going to be some complexity.
Danny [00:26:19]: I think it's not only visible, but I feel like if you opened up Gmail development, or take Linux development: people use Linux, it's very important, but most people don't follow it or give a shit. Whereas you have this massive community of people that follow all of this; it's their sitcom, their reality show, watching all of this happen in real time. And I think that's because of, I guess, the monetary kind of relationship with the platform, and also because people very much really believe in this philosophically. And so, not only is it open, but everyone watches.
Tim [00:26:56]: That’s true.
Sina [00:26:57]: I mean, it's also just really cool to be able to observe something like that. It's not even limited to Ethereum core development for me; there are a bunch of projects in this space where you can just hang around in the Discord, and some of them are more open than others, and you can literally see a very interesting, ambitious protocol get built in front of you. And you would usually only have that sort of front row seat if you were inside a traditional company.
Tim [00:27:26]: Yes, agreed. And I think my one goal with regard to this process, in the medium or long term, is that I do want it to seem less and less exceptional. I think it was Rick Dudley who had this comment a while back, saying if your process requires genius every time to work, well, you don't have an engineering process, you have an artistic process. And I do think that's a fair criticism of Ethereum development, and we can't avoid that today. There are just so many moving parts to the system and so many big changes coming. But I think over time it would be awesome if we were in a spot where the process was a bit easier for people to follow and to contribute to, and perhaps required less of these artistic displays.
Sina [00:28:19]: Yes, I don't know, it's part of the charm. There are a lot of artists around.
Danny [00:28:25]: So something I want to go back to is the how, and Tim started touching on this. One, everything is viewed, every potential change, with an incredibly security-first mindset. And so it's really easy to cut out a ton of the crap and to prioritize because of that. We're always going to prioritize something that makes the system more secure over a feature if we have to. And the second thing, I think, is that there is a very strong ethos and philosophy in this, coupled with the fact that the system isn't quite what we need it to be yet, and so there's this drive for progress. And so yes, there are incredible people that do some incredible research and lay down roadmaps and things like that. But the willingness to bite down on it and continue forward really comes from this ethos of: this isn't done. We have something good here, and we can make it what it needs to be, but it's not quite there yet. And so you combine that with the security ethos, and I think that's what keeps us going.
Sina [00:29:37]: Right. And it really is a long time horizon. There's been a lot up to this point, and even from today looking forward, there are many years' worth of very interesting, hard, impactful technical problems that need to be solved, with the merge and way beyond. There's a bunch of other stuff after.
Danny [00:30:01]: Not to mention, we're in an applied cryptography renaissance, where every six months there's some radical new technology that, when you combine it with cryptoeconomics, just seems like literal magic. And so, with the stuff that's going to happen with virtual machines and zero knowledge proofs over the next few years, there's a lot coming.
Sina [00:30:21]: Yes. So maybe let's take this transition to start talking about the merge specifically. What is this actually going to look like? What are all of these people that we're referring to working on now, and over the next months, as we get to this milestone?
Danny [00:30:38]: I think it's probably valuable at this point to discuss the separation of concerns and layers, and how this is kind of a happy accident that's happened. So what we call Eth2, or the beacon chain, has been architected in relative isolation from the existing chain. And on the existing chain, the proof of work consensus mechanism is very stable. If you go look at the diff across Geth over the past five years, they probably haven't touched it very much at all. All the optimization has been in the execution layer: running the EVM very efficiently, managing state, doing sync, all that kind of stuff. So we have very sophisticated pieces of software to do this.

And then in parallel, we've created very sophisticated pieces of software to do proof of stake consensus with hundreds of thousands of participants, with all sorts of cool stuff going on there. And so we have these teams and these pieces of software that are actually pretty specialized. The whole framing of the merge and its specifications, and even how it's going to look in engineering, leans into this separation of concerns, such that we have the consensus layer, the proof of stake consensus layer, with its own specifications, and we have a link into the execution layer, which has its own specifications. And we have two pieces of software. Essentially, think about Geth: you have Geth, with the EVM and the state and all that stuff, and then you have this proof of work brain that's driving it. It's saying, hey, this is the head; this is a new block, that kind of stuff. We can take that brain out; we're going to do brain surgery, or maybe we're going to keep the same body and switch the brain.

So you take that brain out, and instead of listening to proof of work, we can listen to a beacon chain client, and we can actually keep these as two pieces of software that run in harmony together, the beacon chain software essentially driving the execution layer, whether that's Geth or Nethermind or any of the other clients. And so what it looks like is teams on both sides of the aisle coming together, creating the link between the two. That EIP you saw was really the link between these two things: the proof of stake consensus layer and the existing execution layer. And so now it's really about refining that link, ensuring that the core of the execution layer that we know and love, state sync and that kind of stuff, can continue to operate with this new brain, and bringing it all together, testing, doing testnets, and beating it to hell. And then we'll go into launch. There are a lot of other details in there, but that's the high level.
Tim [00:33:16]: And maybe one thing I'll add about why this is possible and desirable: at the very beginning, Danny mentioned, I forget your three keywords, you have sustainability, scalability, and the third one, security…
Danny [00:33:33]: [00:33:33 cross-talking] You know these talking points.
Tim [00:33:35]: Exactly, yes.
Sina [00:33:36]: It's a good one. That's etched into my…
Tim [00:33:38]: Yes. And then, this merge gives us the sustainability and the security. A lot of people look at the original roadmap of Eth2, and there's a big focus on the other S, the scalability. And one thing that's been really interesting to see in the past year and a half or so is the rise of roll-ups on Eth1. As the work on the beacon chain happened, we launched the beacon chain, and life was great. The original roadmap was: well, then we'll do sharding, and then eventually we'll add computation to those shards so that they can provide scalability. But we're starting to see scalability just happen on the base layer that we have today, via roll-ups. And that kind of simplifies a lot of the design, where if we only need to take care of these two S's for now, we can do the merge and also rely on all of the roll-up teams to scale Ethereum for the users, and eventually we can get into that later. We'll have shards, and that'll help make roll-ups cheaper. But the general work around scalability can kind of be outsourced to those teams, and that's a super valuable development that I think is sometimes underrated.
Danny [00:34:56]: To say that…
Tim [00:34:57]: Protocol teams, yes.
Danny [00:34:58]: I like to say that the roll-ups are buying us time so we can carve out those other letters, those S's.
Sina [00:35:04]: But it feels like there were almost two big aha moments that happened... One was realizing that the existing Eth1 clients and the Eth2 clients are actually complementary and can be coupled together in a really nice way. And the other was basically realizing that we have roll-ups and we can switch the order of the merge and sharding. And then even when sharding happens, the first part of that is going to be data availability: shards that aren't intelligent, that can't execute things, and that also couples nicely. So there were two big realizations. And I remember the roll-up-centric realization, when Vitalik, I think, talked about it at an ETHGlobal event, and was talking about it before a little bit as well. But how did the two clients, and realizing how they could interface with each other, how did that come about?
Danny [00:36:10]: I did write a research post that talked about client separation and leveraging these components. And it's something that we had probably talked about for some amount of time before, but for me, it was really that there were some proposed roadmaps that essentially scrapped the EVM, scrapped all of the work that the Geth and Nethermind teams have done for years. And that just did not seem palatable from almost any perspective. For one, it's much more of a forced migration; two, we have these incredible experts who have built up all this expertise, and we're just going to scrap their clients and try to move on? And so really, that was the motivating factor for me: this all needs to come together in a much cleaner way, and that pushed in that direction. There were glimmers of this over the past couple of years, but it really started coming together, I think, with that post talking about how these clients can be separated. Because at that point, folks from Geth started doing some work [00:38:15 inaudible] and did a proof of concept along with Mikhail to show that, oh, you can actually do this. And that really picked up steam from there.
Sina [00:37:24]: Well, maybe let's spend some time just talking through how this double-client architecture actually works, because I feel like it's pretty interesting, especially given that the transition is going to happen within a live network. So, I think in the latest designs that I saw, and I may be wrong here, there's the beacon chain that's coming to consensus on its own state, and at some point you basically take the state root of the execution chain and you put it in there, and it's a first-class citizen inside of the beacon chain. And these two clients are going to be running on the same machine and communicating with each other via an RPC protocol. What are the brushstrokes of how this is actually going to happen?
Danny [00:38:21]: So yes, that's the core. Essentially, upon some condition, probably some sort of terminal total difficulty on the proof of work side, the validators on the beacon chain pick a final proof of work block, reference it, and build upon it. So essentially the valuable user layer payload, which is transactions, is shoved into a beacon block. One of the items that is being come to consensus on is also the post-state root, which is essentially the Merkle digest of what the state did. And the way that works from an engineering perspective, we can talk about the actual point of the merge in a second, but say the merge has happened: the validators on the beacon chain are called upon to build a block. This is the beacon node, the consensus layer side.

And I have an execution engine running, a [00:40:24 inaudible] node running in conjunction with me. It knows about state. It's been managing the transaction pool, pretty much everything in the user layer. I'm called upon to produce a block, and I do my regular validator things: maybe include some attestations, maybe include a couple of validator deposits. But then I get to the execution stuff. Essentially, I want to create a valuable execution layer payload for this block. And so I'm going to say, hey, local Nethermind instance, give me a valuable payload. And it's very equivalent to what exists right now: on the JSON-RPC, there are a couple of proof of work methods, one called getWork and one called submitWork, and it's essentially, give me the hash; this is what third-party miners build their software on.

Essentially it says, hey Geth, give me the hash of something valuable to mine on, like a bunch of transactions, and then it mines on it, and if it finds a solution, it submits it back. And so it's kind of similar, in that I go, hey, give me a valuable payload, and it's going to use very similar logic to the getWork thing, where it's essentially bundling a bunch of transactions, doing the computation on them, actually running them, producing the post-state root, and giving the payload back to the validator on the beacon node side, which puts it into its beacon block, signs it, and broadcasts it to the network. And that's the core of the functionality: the ability to ask for a valuable payload, and also to insert payloads.

So say I'm a validator and I didn't propose this slot, but I see a block came in. I'm the brain; I'm checking the proposer signature, I'm checking some of the outer consensus layer stuff, and then I get to this execution layer thing, which is transactions, stuff related to the 1559 base fee. And I say, hey, execution engine, you're really good at this stuff; tell me if this is valid. So I pass the payload over to the execution engine, Geth or Nethermind or OpenEthereum, and it runs the computations, just like it would run a block today. It checks the post-state root, it updates its local state, and then it essentially returns true or false: hey, was this valid or not? And that just becomes an additional validity condition on the beacon block itself. And then that state root is embedded in the beacon state. So if you think about the Merkle tree of the outer beacon state, you have the validators, you have some historical stuff, and you have the execution layer state root. And if you dig deep into that, it actually goes all the way into all of the Ethereum state. So it's funny, the realization here is really: what did we build? We built a consensus mechanism called the beacon chain. Let's come to consensus on stuff with it.
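A rough sketch of the two-client flow Danny just walked through. The real interface is the Engine API (methods in the spirit of engine_getPayload and engine_newPayload, spoken over local JSON-RPC); everything below, from the class names to the toy state root, is illustrative stand-in logic rather than actual client code:

```python
from dataclasses import dataclass
from hashlib import sha256
from typing import List

@dataclass
class ExecutionPayload:
    parent_hash: bytes
    transactions: List[bytes]
    state_root: bytes          # Merkle digest of the post-execution state

@dataclass
class BeaconBlock:
    slot: int
    attestations: list
    deposits: list
    payload: ExecutionPayload  # execution data embedded in the beacon block

class ExecutionEngine:
    """Stand-in for Geth/Nethermind/OpenEthereum: mempool, state, EVM."""

    def __init__(self) -> None:
        self.mempool: List[bytes] = []

    def _execute(self, txs: List[bytes]) -> bytes:
        # Toy 'EVM': the post-state root here is just a hash over the txs.
        return sha256(b"".join(txs)).digest()

    def get_payload(self, parent_hash: bytes) -> ExecutionPayload:
        # "Give me a valuable payload": bundle transactions, run them, and
        # report the resulting post-state root (the getWork analog).
        txs = self.mempool[:100]
        return ExecutionPayload(parent_hash, txs, self._execute(txs))

    def new_payload(self, payload: ExecutionPayload) -> bool:
        # Re-execute someone else's payload and check its post-state root;
        # this true/false verdict is what the beacon node folds in.
        return self._execute(payload.transactions) == payload.state_root

class BeaconNode:
    """Stand-in for the consensus client: the 'brain' driving the engine."""

    def __init__(self, engine: ExecutionEngine) -> None:
        self.engine = engine

    def propose(self, slot: int, attestations: list, deposits: list,
                parent_hash: bytes) -> BeaconBlock:
        payload = self.engine.get_payload(parent_hash)  # ask, don't compute
        return BeaconBlock(slot, attestations, deposits, payload)

    def validate(self, block: BeaconBlock) -> bool:
        # ...proposer signature and other consensus checks would go here;
        # then payload validity becomes one more condition on the block.
        return self.engine.new_payload(block.payload)
```

The signing and broadcast steps are omitted; the point is that the consensus client never runs the EVM itself, it only asks the execution engine for a payload or for a verdict on one.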
Sina [00:42:17]: Right. Yes, it makes a lot of sense. There's this idea that you're coming to consensus, and it doesn't matter what you're coming to consensus on. And then, whatever you come to consensus on, you use some state machine to actually make meaning out of it.
Danny [00:42:37]: I said this before, but it's been an incredibly happy accident to have specialization, both in teams and expertise and in the software itself. And we might even see more of it; I think we've actually begun to see some of it with Flashbots. What is MEV-Geth? It's Geth, with a modification of this thing that they control and specialize in, which is the management of the mempool in a more sophisticated way, and the creation of very valuable blocks. And they even want to try to carve that out and make it a more modular piece of software so that any client can leverage it. So all of a sudden you then have a consensus layer, an execution layer, and maybe even a transaction/MEV layer.
Sina [00:43:28]: Right. So MEV-Geth, if we were going to fit it into this model, is a piece of the execution side of the puzzle. And it's basically taking a mempool as an input, creating a valuable block, and then passing it to the consensus side.
Danny [00:43:48]: Yes. And additionally, they have some more sophisticated market mechanics where, rather than just submitting transactions, people can submit bundles of transactions to be picked up, because obviously how you tie transactions together can have different MEV properties. And so that's really the specialization there: the modification of Geth's mempool to support bundling, and to support a market for bundles.
Sina [00:44:14]: Yes. Well, one of the things we were going to talk about was MEV as well, so maybe let's just jump into that now. How does MEV evolve with the merge? And actually, because we've also just had 1559, maybe there are some changes that are going to play out right now as well. But how is MEV going to change in the next year as a whole?
Tim [00:44:41]: That's a hard general question to answer, but I'll try.
Sina [00:44:45]: I'm just throwing you to the wolves.
Tim [00:44:47]: I'll try and narrow it down. So starting with 1559: there's not a ton of impact with regards to MEV and 1559. The biggest one is just that every transaction on the network needs to pay the base fee. So that means you just need to re-architect MEV bundles, where prior to 1559 you could have, say, your first N transactions pay zero [00:46:10 inaudible] and then the last one pay the [00:46:12 inaudible] the miner, or something like that. Now, every transaction needs to cover at least its own base fee. So it doesn't change kind of...
Danny [00:45:24]: It changes things a little bit. So, one of the design goals of iterative MEV work is MEV minimization, and 1559 does do some of that. It does reduce the magnitude of what a proposer can extract from a block. And MEV minimization is maybe a good target for the next multiple years, to help with core security stuff, which we don't need to necessarily get into right now. But it does modify things a little bit.
Sina [00:45:50]: How does it do that? How does it lessen the amount of MEV that can be extracted?
Tim [00:45:55]: Because if it's less than the base fee, basically.
Danny [00:45:58]: Pretty much.
Tim [00:45:59]: Yes.
Sina [00:45:59]: Because some of it is being burned and isn't directly accessible by the proposer.
Danny [00:46:03]: Right. Previously you got all of the transaction fee, so you got MEV plus transaction fee. Obviously, the way market mechanics work out, you might not get all of that, but yes.
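A simplified sketch of the accounting being described, with hypothetical numbers; the fee fields are reduced from EIP-1559 (in reality the tip per gas is min(max_priority_fee, max_fee - base_fee)):

```python
BASE_FEE = 100  # gwei per gas, set by the protocol and burned (hypothetical value)

bundle = [
    {"gas_used": 21_000, "max_fee_per_gas": 100, "priority_fee_per_gas": 0},
    {"gas_used": 21_000, "max_fee_per_gas": 100, "priority_fee_per_gas": 0},
    {"gas_used": 90_000, "max_fee_per_gas": 600, "priority_fee_per_gas": 500},
]

def valid_post_1559(bundle: list) -> bool:
    # Every transaction must now cover at least the base fee itself;
    # pre-1559, the first N could pay a zero gas price with the last
    # transaction paying the miner directly.
    return all(tx["max_fee_per_gas"] >= BASE_FEE for tx in bundle)

def proposer_revenue_gwei(bundle: list) -> int:
    # Only the priority fee (tip) reaches the block producer.
    return sum(tx["priority_fee_per_gas"] * tx["gas_used"] for tx in bundle)

def burned_gwei(bundle: list) -> int:
    # The base-fee portion is destroyed, which is why MEV worth less
    # than the base fee is no longer extractable by the proposer.
    return sum(BASE_FEE * tx["gas_used"] for tx in bundle)

print(valid_post_1559(bundle))        # True
print(proposer_revenue_gwei(bundle))  # 45,000,000 gwei to the proposer
print(burned_gwei(bundle))            # 13,200,000 gwei burned
```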
Tim [00:46:12]: That's true. That's interesting, though, because when people think about MEV, everybody brings up the 100 Eth MEV blocks; you see those on Twitter. But I was looking at the Flashbots data, and it's something like 80 or 90% of the MEV transactions are less than 0.1 Eth. So there is a very large, long tail of small MEV. So yes, that's a good point; I'm curious how much of that is actually smaller than the base fee. And I think with the merge, there are a couple of things that change. The first is you know a short time in advance who is producing the block, whereas under proof of work, that's not something that's known, right? Every block is basically random.
Sina [00:47:02]: And you know this at the beginning of an epoch?
Tim [00:47:05]: Yes.
Danny [00:47:06]: Yes, right at the zeroth slot of an epoch, and an epoch is 32 slots, I know the proposers for that entire epoch, yes.
Tim [00:47:16]: Yes. So you can know exactly who will be the block proposer. The other big change is the set of block proposers is much bigger than the set of current miners. Like Danny mentioned earlier, any 32 Eth validator can be a block proposer, and I think that's where, in terms of MEV design structure, you're going to see the most change. How do we go from contacting 5 to 10 mining pools to contacting potentially a hundred thousand validators, and just handling that? And the other thing I think is really interesting: again, we mentioned that validators have a 32 Eth stake, and if you want to stake more than 32 Eth, you basically get multiple validators. So you might start to see cases where you know in advance that a single entity controls two validators who propose two consecutive blocks. So say, I don't know, some exchange or some staking product's validators happen to be the proposers for two blocks in a row. You might be able to have multi-block MEV strategies, where you're lowering the price on some DEX in one block and then picking something up in the other block, or something like that. And I think that's a huge design space that's not explored yet. It won't necessarily be a frequent occurrence, but I suspect we'll start to see what the biggest MEV opportunities possible are, and perhaps see those exploited across two blocks, yes.
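A sketch of why the known proposer schedule matters. The schedule dict stands in for what a beacon node's validator duties endpoint would give you, mapping each slot of an epoch to whoever controls that validator:

```python
SLOTS_PER_EPOCH = 32  # so a schedule is public roughly 6.4 minutes ahead

def consecutive_runs(schedule: dict) -> list:
    """Find windows where one entity proposes back-to-back slots.

    `schedule` maps slot -> controlling entity (solo staker, pool,
    exchange...), known at the first slot of the epoch.
    """
    runs = []
    for slot in sorted(schedule):
        if schedule.get(slot + 1) == schedule[slot]:
            runs.append((slot, slot + 1))  # a multi-block MEV window
    return runs

epoch_schedule = {100: "exchangeA", 101: "exchangeA", 102: "solo_staker"}
print(consecutive_runs(epoch_schedule))  # [(100, 101)]
```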
Sina [00:48:58]: Yes. So with this transition to validators producing blocks, how is MEV going to change? On the one hand, it's a very positive development because it democratizes MEV: every validator, anyone who becomes a block proposer, will get some MEV. So what are the dynamics going to be there? Do you imagine there being MEV tools that people run locally? Or will there be, for instance, services that you subscribe to, and when it's your turn to be a block proposer, this service just gives you an MEV bundle, and you give it a cut because it figured it out for you and you take the rest? How is all of this going to shake out?
Danny [00:49:56]: That's kind of what happens today, increasingly, with Flashbots and the way their market works, and we'd expect a very similar port of that infrastructure into the proof of stake context. Although the number of participants is much higher, so some of the mechanics might need to change, a lot of that is likely going to operate in a similar way, at least at first. And although it's a very sophisticated mechanism, it's very early in terms of MEV markets, so there's a lot of room for improvement. But that does help with the democratization of MEV, because otherwise you might expect very large pools that have R&D budgets for MEV; they can get 10 times the MEV of the home hobbyist, and all of a sudden you see a massive centralization vector because they can be more profitable. And so these open markets are very critical over time, and fortunately one does exist moving into the proof of stake world. There's a lot of work to do on them, though.
Sina [00:50:58]: It's a centralization vector because you just kind of accumulate more Eth over time, and you set up more validators, and your share [00:51:05 cross-talking]...
Danny [00:51:05]: Or I'd have a disincentive to be a hobbyist, and I might want to go with Coinbase because Coinbase has an MEV research group that is always on the cutting edge. It's kind of like high-frequency trading: you either can play the game or you can't, and if you have a ton of money, you can play the game; otherwise, you just cannot play the game. And so if you want your capital to be with the high-frequency traders, you have to just give it to them; you can't do it at home. Whereas if you have a free market of [00:51:30 inaudible] being sold essentially to block producers, then you minimize that disparity.
Tim [00:51:39]: Yes.
Sina [00:51:39]: Yes. How does the Ethereum network look in terms of who's actually running these validators and nodes? For example, there is a pretty strong hobbyist community. What does a person who is running a home staking setup look like; who are they? Are people just setting up a server rack at home and running this, or what are people actually doing there? And what percentages of the network are these different groups?
Danny [00:52:09]: So, first of all, huge props to the EthStaker crew and the work they've done over the past year, year and a half, to enable the hobbyist community. Obviously, there's been a huge design decision at a fundamental level to be able to target low minimums and things like that, but they did a lot of the legwork to actually help the community learn and grow and understand hardware, and they built guides and all that kind of stuff, so that hobbyists actually could be enabled. Who are they and what are they running? Hard to say. I know some of them; they seem like regular people. They're often running on NUCs, I think they're called NUCs, which are just kind of small dedicated PCs. Sometimes they have sophisticated setups where they have an uninterruptible power supply, like a backup power supply in case things go down, that kind of stuff. And some of them are even experimenting with Raspberry Pis.
So the Ethereum on ARM folks are running a couple of validators on Raspberry Pis, which are less-than-a-hundred-dollar devices that are very resource-constrained, with success. And they plan on being on all of the merge testnets to ensure that we can actually run a fully merged client and a couple of validators on that hardware, which is pretty sweet. But it ranges. It's interesting, because I know some very technically sophisticated hobbyist validators, but I'm also aware, just from talking to people on the internet, that there are a lot of people for whom this is by far the most technical thing they've ever done. They're Ethereum enthusiasts; they've been around, let's say, for a long time. And then they just rolled up their sleeves, read some guides, jumped on a command line for the first time, and got their stuff running.
And fortunately, shout out to [00:54:58 inaudible], there are some really great guides on setting up firewalls and getting past just the basic client configuration to get this stuff running. It's been pretty cool to see that grassroots effort. The counter to that is, looking at the breakdown of known deposits, and although there are whales in here, hobbyists probably make up more than one third, but probably don't make up more than 40% today. So we're looking at something like 60% being much larger players, some big ones, Kraken and... oh, actually, there's a classification on this website I'm looking at here, I'll share it with you, called whales, which is 9.38%, and I assume that's not an exchange. So actually, if you include whales as hobbyists, which is probably fair, they're independent actors, we might exceed maybe 40, 45%. It's hard to say what's in this blue block of "others." But we have some exchanges which offer staking services, and then we have some staking institutions where that's their primary thing. And a big mix around there, obviously some larger ones and then many smaller actors.
Sina [00:55:22]: So, and either of you feel free to take this, but how do we think about the incentives of someone staking on their own? Why would someone actually do that, given all of the work that it involves?
Danny [00:55:37]: So it's not that much work. One, it's fun; I mean, maybe that's not a good enough reason. So any of these staking institutions are going to be…
Sina [00:55:45]: It is fun. And that's a good thing; especially in a new, emerging thing, being a part of the community is a very positive thing. I would say that's...
Danny [00:56:00]: It's very powerful to just learn and do, and be a part of this decentralized thing. Obviously, there are incentives for a reason, to help incentivize people actually doing this, but for the first while, that's a pretty powerful incentive in itself. But any of these large players that you could put your capital up with and stake, they're going to be taking some amount of fee. And so it's a trade-off. They might be taking 25% of your staking returns, which, maybe that's worth it. But there are risks associated. So we have this thing on Eth2, on the beacon chain, called slashing, and that's essentially: if I do something provably, cryptographically nefarious, which essentially comes down to contradicting myself.
If you're following the protocol, you'll never contradict yourself; you're allowed to change your mind and come back later, but you cannot forget that you made a different decision. So you can be slashed for these provably nefarious things. But the amount that you're slashed is actually related to the amount of other people that were slashed recently, because you actually have to have at least a third of the network, oftentimes even 50% of the network, to create a network fault, to create two viable histories. So if I'm just slashed in isolation, I run my node at home and I got hacked, or I do something really stupid with the management of my keys, I'm going to be ejected from the validator set, but I'm not going to lose that much money if it's not correlated with other people.
Whereas if I go to an exchange, and they're 20% of all the capital being staked, and they have some sort of insider threat or a hack or something where they have a mass slashing, they're actually going to lose a lot. You lose the percentage of the network that was recently slashed, times three, such that if one third of the network was slashed, you'd have the maximum punishment of a hundred percent. So if 20% of the network was slashed because of some massive exchange hack or something like that, they'd lose 60% of their capital. So there's not only that fee; there are also a lot of other considerations. If I were to select an exchange, or which institutional player I chose, it's not just: pick the biggest one with the best reputation.
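Danny's numbers drop straight out of the correlation penalty, sketched here in simplified form (the actual beacon chain spec computes this over a windowed sum of slashed balances, but the times-three factor and the one-third saturation point match what he describes):

```python
def correlation_penalty_eth(stake_eth: float, fraction_slashed_recently: float) -> float:
    # Lose three times the fraction of total stake slashed in the
    # surrounding window, capped at losing everything.
    return stake_eth * min(3 * fraction_slashed_recently, 1.0)

# A solo staker slashed in isolation mostly just gets ejected:
print(correlation_penalty_eth(32, 0.0001))  # ~0.01 Eth lost
# A 20%-of-the-network event, say a huge exchange hack:
print(correlation_penalty_eth(32, 0.20))    # ~19.2 Eth, i.e. 60% of stake
# At one third of the network slashed, the penalty saturates at 100%:
print(correlation_penalty_eth(32, 1 / 3))   # the full 32 Eth
```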
Sina [00:58:20]: Yes, I always really liked that idea of the amount of slashing being correlated with what portion of the network is getting slashed. But there are other considerations I would balance that against: okay, I'm going to run this staking setup at home, and there's always a chance that I mess it up, compared to a professional doing it. If this big, reputable exchange messes up, there's a chance they'll make me whole, because their reputation is on the line. So I think it's probably hard to think about how these things shake out in the short term; it's more what they converge on over time, especially maybe after things go wrong a few times and people start to see what [00:59:09 cross-talking].
Danny [00:59:06]: We call them anticorrelation incentives, and there are a few others, with respect to liveness and some other stuff. But the problem is, they're penalties in tail-risk scenarios, and we are very bad at assessing the probability and impact of tail-risk scenarios. And so, I bring them up a lot because I think they're very clever, and I think they're good incentives to have in place, but I think it's very difficult for the average person to assess the judgment call that those anticorrelation incentives should push them toward. And so you're right; you mentioned maybe you have to see some of the bad things happen before people are actually able to make decisions with respect to them.
Sina [00:59:51]: And what about staking derivatives? Let's say, I'm most familiar with Lido, and I actually quite like the way they've progressed on the protocol. My understanding of how it works is that on the one side, they allow you to deposit into their system, and on the other side, they funnel this into multiple staking operators. So you're not just taking a correlated risk with everyone else who's under their system. And also, just having followed their progression for a bit, I sense that their vision is to build a truly decentralized system over time, where anyone can become a staking operator, there are reputations involved, and it evolves into a middleware layer. And others might be doing similar things.
Danny [01:00:56]: I'd say they're not a staking pool; they allow for pooling, but they're more of a staking middleware. Tokenized staking middleware that does round-robin allocation to underlying staking providers, for a diversified risk pool, for a staking derivative that has yield.
Tim [01:01:18]: I think it's worth noting the pros and cons. Especially today, there's still a fairly high element of trust in Lido, because of how much the protocol currently allows them to decentralize. I know the team has been working on some proposals, say around withdrawal credentials and whatnot, that could help make it a bit more decentralized. But you do have this extra trust assumption and this extra cost. I forget what the exact percentage is, but obviously Lido, like any of the others, takes a cut of the rewards. The upside is, one, if you don't have 32 ETH, it makes staking accessible, and you spread your risk instead of sitting with a single provider, like Danny mentioned. So it makes it easier to get access to staking rewards without having that minimum amount. And two, you get the derivative token back, stETH, which you can then use. It will be interesting to see in the medium term how much people value the fact that they can use the staked token to do other things, versus just leaving it on the beacon chain alone.
Sina [01:02:36]: So what are the second-order effects of this? If more and more of the stake moves to systems like Lido, what ramifications does that have for the network at large?
Danny [01:02:54]: It's complicated. It certainly has security implications, because you could potentially leverage these assets in different ways, say, to hedge against an attack that you might be conducting in another context. I think the withdrawal delays that exist, essentially a queue on exiting and on being able to get capital out, protect against a lot of this. If you could instantly join staking, get some derivative asset, [01:03:26 inaudible] it, simultaneously conduct an attack, and also get the assets out, that would be a much worse situation. But I don't know; I need to think about it more. There are a lot of people thinking about it, but staking derivatives definitely complicate the naive security assessments.
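[Editor's note: a rough illustration of why the exit queue blunts a "join, attack, withdraw" strategy. The churn numbers follow the beacon chain's exit-rate rule as commonly described, at most max(4, validators // 65536) exits per epoch; treat them as illustrative rather than authoritative.]

```python
EPOCHS_PER_DAY = 225  # 32 slots x 12 s per slot = 6.4-minute epochs

def days_to_exit(n_attacker: int, n_total: int) -> float:
    """Days for an attacker's validators to clear the exit queue."""
    exits_per_epoch = max(4, n_total // 65536)
    return n_attacker / exits_per_epoch / EPOCHS_PER_DAY

# An entity controlling 20% of a 200,000-validator set cannot leave fast:
print(days_to_exit(40_000, 200_000))  # ~44 days queued at the exit churn
```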
Tim [01:03:49]: I do think there is something good in it being middleware, where the people who would have to conduct the attack are not necessarily the people who deposit the ETH but the people who run the validators. And you might be able to have a system where, say, they put up a bond. Or, to be fair, a lot of them are actual legal entities; in Lido's case especially, I believe most of them are known companies. So it's not just that some random whale can come into Lido, put in a hundred thousand or a million ETH, run an attack, and then withdraw it immediately. I think the more layers and delays you add to the system, the more you reduce some of the risks, or at least increase the coordination costs of a successful attack. And that's valuable.
Sina [01:04:46]: Yes, very interesting. And I also feel like, at least in the early days, there's a way in which, if you're staking directly, or more directly through a staking pool, you feel that you have skin in the game. If something goes wrong and you get slashed, you lose funds yourself, directly. Whereas through a system like Lido, those losses are aggregated and socialized across the system, so you feel them less. Maybe you wouldn't even realize there had been a slashing, because you're just not seeing the incremental changes.
Tim [01:05:31]: Okay, so I will say: diversity. These things will clearly exist. Lido, as Danny said, is very popular today; unfortunately, Lido is pretty much the only option. There should be diversity in on-chain staking pools, and on the spectrum of decentralization there are many different aspects you can decentralize, with respect to the types of systems you build. I would hope to see massive amounts of experimentation in this. And if you're listening and you're looking for a business opportunity: make a competitor to Lido. It'd be healthy for everyone to see that, I think.
Sina [01:06:11]: Yes, it's interesting, this pattern that seems to keep repeating in this space, where there's an early leader and no one else attempts to do the same thing. And then the...
Tim [01:06:22]: To be fair, there was another early leader in this case. I think Rocket Pool...
Danny [01:06:26]: They were never a leader. They never launched.
Tim [01:06:29]: Yeah. And I think this is a really interesting case of what we were talking about earlier, pragmatism versus idealism. Rocket Pool seems to have taken the approach of waiting until the protocol allows you to be sufficiently decentralized before launching, whereas Lido launched with whatever was available, hoping to decentralize over time. So yes, hopefully, as more and more functionality is enabled and there's a bigger design space, we see more people coming in and experimenting with it.
Danny [01:07:05]: It's funny, because at the L1 level people sometimes argue somebody else is going to come along and [01:07:08 inaudible] launch, and maybe they will, we'll see. But Ethereum launched early, with design flaws and with a consensus mechanism that wasn't perfect, proof of work, and has been iteratively making it better. So in that sense, I think saying that Ethereum isn't out there making moves is wrong, because Ethereum kind of did the Lido thing: it launched as early as possible. Honestly, they were running out of money; it was time to go.
Tim [01:07:45]: Yes. Good thing we didn't wait for proof of stake, right?
Danny [01:07:48]: Yes. And I'm still eager for Rocket Pool to launch, no negativity there, but I would have hoped it had launched already. And I would hope that there are many other interesting options coming. There's this development called secret shared validators, which leverages some of the cool properties of BLS signatures and threshold signatures to take a single stake, a single validator, and split control of it among a number of entities, with its own safety and liveness properties for controlling that validator. These have actually become relatively sophisticated; they're on testnet now and have been hammered on for a while. And I think that opens up a lot of interesting companies, and interesting dials and things to build with. So I do think we're going to have a second wave, towards the end of this year, of interesting options coming out in this layer, and I'm super excited.
Sina [01:08:55]: So the idea being, I guess: one of the value propositions of a system like Lido that we've been talking about is that you could have less than 32 ETH and still participate. And this is kind of baking that into the protocol, where a validator still requires 32 ETH, but multiple people could come together to put that up, and governance and...
Danny [01:09:18]: Yes. Essentially, you have a consensus mechanism for your validator, which then participates in the consensus. There are a lot of different ways you can design that, but there's this R&D group working on secret shared validators; they're on testnet and you can do it today. So there are a few different things you could do with it. You could join with three friends and split it down the middle, doing a two-of-three secret shared validator where each of you runs a node. They come to consensus, and you know and trust each other fully; you have liveness properties if one of you goes down. Or you might be an institutional staking provider and find it a more secure setup to split your keys into parts and have many nodes and redundancy and such. So it actually opens up the [01:10:03 inaudible] landscape.
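[Editor's note: a toy 2-of-3 secret-sharing sketch to illustrate the threshold idea behind secret shared validators. Real SSV designs use BLS threshold signatures so the validator key is never reassembled in one place; this version reconstructs the secret directly and is for intuition only, never for a real key.]

```python
import secrets

P = 2**255 - 19  # a large prime field; the choice is illustrative

def split_2_of_3(secret: int):
    """Encode secret as f(0) of a random degree-1 polynomial; shares are f(1..3)."""
    a1 = secrets.randbelow(P)
    return [(x, (secret + a1 * x) % P) for x in (1, 2, 3)]

def recover(share_a, share_b):
    """Lagrange-interpolate f(0) from any two shares."""
    (x1, y1), (x2, y2) = share_a, share_b
    l1 = (-x2) * pow(x1 - x2, -1, P) % P
    l2 = (-x1) * pow(x2 - x1, -1, P) % P
    return (y1 * l1 + y2 * l2) % P

key = secrets.randbelow(P)                    # stand-in for a validator key
shares = split_2_of_3(key)
assert recover(shares[0], shares[2]) == key   # any 2 of the 3 shares suffice
```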
Sina [01:10:07]: This is a more robust way of building a system where you have backup keys, because right now that's one of the main reasons people get slashed, right? Is they...
Danny [01:10:18]: Don't put the same key in two places and tell them both they're in charge; it's a bad idea. Go offline for 10 days rather than trying to make a redundant setup.
Tim [01:10:29]: Actually, that's something people have asked me, and I think it takes this further. A lot of individual validators ask: well, what if I move? I'm moving from New York to San Francisco and my validator is going to be offline for two weeks; should I even bother setting one up? Can you talk a bit more, for an average user, about the conditions under which they're expected to be profitable?
Danny [01:11:01]: So, assuming the network is finalizing, which it pretty much always is, though you can't take that for granted. If AWS goes down and we find out too many validators were on AWS, they're all going to lose a bunch of money, because the chain isn't going to finalize and they're all correlated. Don't be on AWS; be uncorrelated, run it at home. But assuming the network is finalizing when you're offline, you stand to lose about three quarters of what you would have made online. Just call it one-to-one to make it easy. So if you're offline for a day, you lose a day of profits. Literally, not just opportunity cost: your balance goes down a little bit.
Or, if you had been online for the day, it would have gone up by about the same amount. So here's how to think about it: if I'm offline for a day and then come back online, after one day my balance has gone down a day's worth and then back up a day's worth, so I net zero. If I was offline for two weeks, then after being back online for two weeks I'd be back where I started; it would amount to four weeks of no return for that year. Four out of 52 weeks is roughly a 7 to 8% reduction in your total profitability for the year, but you're still definitely profitable. Even if you were offline for half the year and came back online, you'd be about net zero for the year; try to be online more than that. As long as you're not correlated with a lot of other people going offline, meaning you're not with a big pool and you're not on AWS during a big outage, you're not going to lose that much money. So if I knew I was going to be offline for two weeks, and two weeks is in the range where it would start to hurt, I'd probably put in the time to power down, get a non-AWS cloud instance, and run there for two weeks while I move. But you need to be careful about the sequence: power down my node, export the slashing-protection database, power up my cloud instance, and later power down the cloud instance before powering back up at home. You never want your keys running in two places at once. So on the two-week time horizon, and certainly on the month time horizon, I'd say it's worth making sure I'm live in the middle. But if the move only took five days, then I personally would just power down my validators and turn them back on in the new location. Because, you know, at 10 days out of the 365 in the year, I'm not losing too much money.
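[Editor's note: a back-of-the-envelope version of the downtime math above, using Danny's one-to-one simplification that each offline day both misses a day's reward and burns about a day's reward from the balance; the actual inactivity penalty is closer to three quarters of the missed reward, so this slightly overstates the loss.]

```python
def yearly_return_kept(days_offline: float) -> float:
    """Fraction of a full year's staking rewards kept, given downtime.

    Each offline day cancels one online day: the balance goes down a
    day's worth, then takes a day online to climb back.
    """
    return max(365 - 2 * days_offline, 0) / 365

print(yearly_return_kept(14))     # two-week move: ~0.92, a ~7-8% haircut
print(yearly_return_kept(182.5))  # offline half the year: ~0, net zero
```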
Sina [01:13:42]: I think I saw you mention somewhere that there's a new feature called doppelganger detection. Is that relevant at all here?
Danny [01:13:51]: Yes. So this came out of, I think, Dipper and Superphiz both having this idea around the same time, and Superphiz coined the term doppelganger.
Sina [01:14:03]: Like all great ideas that arise [cross-talking 01:14:04].
Danny [01:14:04]: Yes, E equals mc squared; a few people got that one. So doppelganger detection, the term coined by Superphiz, is now a feature supported by at least a few clients. I know it's supported by Nimbus, and I know it will be supported in the 1.5 release of Lighthouse, which is coming out soon. And it's a default where, when I turn on my node, because safety failures, meaning double signing, are way worse than liveness failures, my node turns on and syncs, but I don't start signing messages. I just listen for a bit. I listen for two or three epochs and check: is anyone else signing messages with my same key?
If so, shut down. If not after a few epochs, that's a good sign that I'm the only one running these keys, that I didn't accidentally leave my cloud instance on while my home node was still running, and I continue on without any issue. Usually there's a CLI flag, something along the lines of "--UNSAFE-disable-doppelganger-detection-xyz-1010" with "unsafe" in capital letters, something that's really hard to type, so that if you know you're the only one running the keys, you can bypass the two-epoch wait and not deal with it. So yes, even in the UX of not being slashed, especially for hobbyists, things are getting better and better.
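[Editor's note: a minimal sketch of the doppelganger-detection startup flow Danny describes. The function names and the gossip-scanning stub are hypothetical; real clients such as Nimbus and Lighthouse implement this against their own networking stacks and defaults.]

```python
import sys
import time

LISTEN_EPOCHS = 2        # stay silent this long before signing anything
SECONDS_PER_EPOCH = 384  # 32 slots x 12 s on mainnet

def observed_activity_for(pubkeys) -> bool:
    """Stand-in for scanning gossip for attestations/blocks by our keys."""
    return False  # this sketch assumes a quiet network

def startup_doppelganger_check(pubkeys) -> None:
    for _ in range(LISTEN_EPOCHS):
        time.sleep(SECONDS_PER_EPOCH)  # observe one full epoch
        if observed_activity_for(pubkeys):
            # Another live instance holds our keys. A safety fault
            # (double signing) is slashable and far worse than missing
            # a couple of epochs of duties, so refuse to start.
            sys.exit("doppelganger detected: shutting down")
    # Quiet for LISTEN_EPOCHS epochs: reasonably safe to begin duties.
```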
Sina [01:15:27]: Well, gradually bringing this to a close, I'm curious to go back to the question of what happens between now and the merge. We know there are two pieces of software with a relatively clean way of interfacing with each other. Is the work now for these different clients to build out this shared interface and this shared spec, and then set up a testnet and go from there? Are there any big unsolved problems that need to be tackled in the meantime? What does that roadmap look like?
Tim [01:16:08]: Yes. So one thing that's really promising in the direction of the merge is that we tested this back in May. When this idea of pairing the two kinds of clients into a single node started to get traction, we wanted to be sure it actually worked in practice. I personally was pretty skeptical.
Danny [01:16:30]: I knew it was going to work.
Tim [01:16:32]: Yes. There was this month-long hackathon organized by a program called [01:16:37 inaudible], and the goal there was just: can we get this to work on a testnet? Just hack together all of the clients from Eth1 and all of the clients from Eth2, make the minimum set of changes, and get this working. And it actually did work. We got every permutation of Eth1 and Eth2 clients, I think except one Eth1 client, so basically all 12 permutations worked, running on a network together and producing blocks. That was awesome. It validated that this architectural design is sound and we should go with it.
Sina [01:17:21]: So people were running each permutation as a separate node, and they were all coming to consensus together on these [01:17:26 cross-talking]?
Danny [01:17:28]: So we had to come up with a bunch of different hybrid names, like Guesthouse and a...
Tim [01:17:36]: Yes...
Danny [01:17:36]: Prysmind, you know?
Tim [01:17:39]: Yes. So I think that was the big technical de-risking: does this approach generally make sense? Over the past few months, both the Eth1 and Eth2 teams were pretty busy, because we had London going live and then Altair about to go live. I don't think it's been scheduled yet, but soon...
Sina [01:18:01]: And Altair is a hard fork on the [01:18:04 cross-talking]...
Danny [01:18:04]: So it's the first beacon chain hard fork. It has some nice-to-have features, but it's also about putting in place the ability to fork the system live, before we have to fork it with a trillion dollars of value behind it.
Tim [01:18:17]: So, yes, now we have a ton of engineering work to do, and Danny shared this checklist that basically goes over it. It's really about getting this to production readiness: getting it tested and figuring things out. There are a few small details we still need to work out, and decisions that need to be made, but the general architecture is set, and now it's about getting to a spot where it's production-ready. On the execution side, given that London is out, this has now shifted to being the main focus, and teams are starting to work on it. Similarly, once Altair is out on the consensus side, most of those teams will make it their main priority as well.
Danny [01:19:03]: And I would say, yes, all the normal things: refining the specs, writing a lot of tests, testnets, and all that kind of stuff. But my estimation is that the long tail is testing and security. It's having testnets that run for long enough; it's hammering them with all sorts of stuff, like turning half the nodes off and not finalizing for three weeks just to see what happens. Because at the merge, the value the beacon chain is securing ratchets up quite a bit, and getting it right is very important.
Sina [01:19:43]: Are we experimenting with any new mechanisms around security, auditing, bounties, that sort of stuff?
Danny [01:19:53]: So I would say we should at least triple the bounty program, starting tomorrow. That's a conversation for another day, but ratcheting up the bounty program makes a lot of sense to me. There's capital available for this kind of thing, and what we can catch now is way better than what we can catch five months from now. We actually have a couple of new red teamers who are focused very much on the merge; they're onboarding now. They're super good and are just working on breaking things. We still have our fuzzing infrastructure on both the consensus and execution layers.
We've also been talking with a couple of people who are looking into fuzzing more broadly: creating deterministic networks in hypervisors and fuzzing the network itself rather than an individual client. So there's a lot going on. Someone else is looking into network load tests, actually putting a sizable budget toward creating pretty large networks distributed across the world and hammering them with things, or making slot times really short, that kind of stuff, and just seeing what shakes out. So there's a lot happening. All the basics will certainly be there: we're going to get cross-client testing infrastructure and test vectors in place, and we're going to be building testnets and hammering them manually. But if you're interested in this stuff and you have an idea or a security proposal to help with the merge, holler.
Tim [01:21:31]: Yes.
Sina [01:21:32]: Yes.
Danny [01:21:33]: At either of us.
Sina [01:21:34]: And how does it feel to be working on this? I feel like there's something special about the merge and this particular hard fork. I'm always excited about the developments, but this has been a long time coming, and it's one of those moments we've seen coming from years ago.
Tim [01:21:52]: I like the focus. Maybe this is me working on Eth1, but a lot of working on the protocol execution layer is: we have 30 EIPs to sift through, and we try to think through all of them and how they overlap or don't overlap. It's refreshing with the merge: it's this one thing, this one target, and it is huge; it's probably bigger than the sum of those 30 EIPs we usually look at. It's really refreshing to have one focus and have so many teams aligned on it. Yes, I like it.
Danny [01:22:33]: Yes. I mean, bringing proof of stake to Ethereum mainnet is almost the only thing I've thought about for years, to be quite frank. So it's exciting. On the one hand it feels pretty normal, because it's what I've been doing for many years at this point, but with the launch of the beacon chain last year, and actually seeing it all come to a head as we get the merge ready, I'm excited.
Sina [01:22:59]: Well, some good celebrations will be in order after this is all done, I feel.
Danny [01:23:03]: Yes, a celebration is definitely in order.
Sina [01:23:07]: Maybe we need to organize a Devcon around it. Who knows? Maybe the stars align. I hadn't actually thought about that.
Danny [01:23:16]: Yes. I know.
Sina [01:23:17]: Well...
Danny [01:23:20]: There's a lot of uncertainty in global travel, but I can tell you one thing: the merge is happening.
Sina [01:23:25]: Yes, cool. And maybe as the last thing: if people are listening to this and want to work on these problems at the protocol layer, where are the most interesting problems? Where would you encourage people to look? We've talked through some of them; the most recent one that came up was security and really making sure the network is going to be secure. But what other areas are top of mind for both of you?
Tim [01:23:58]: So, to answer your first question, where should people look: the Eth Research forum has kind of a running list of posts about problems and solutions. If you're totally new and just want to get a feel for where things are at, that's maybe a good thing to skim to see if anything there interests you. We also have an R&D Discord where there's more synchronous, iterated conversation about most of the stuff discussed there. So if you do want to contribute and get involved, joining the Discord is probably a good place to start. As for the problems themselves:
I think we've discussed one of the big ones, the merge. And like Danny said, he's been thinking about that for the past three-plus years. I think there are a lot of big things we'll need to do in the three years after the merge, so that's also a good area for people to come in and start contributing. Off the top of my head, the sharding implementation is one: getting shards up and running is a big area of work where more people will definitely make an impact. My personal pet problem is state growth. I think that with 1559 and the merge, we're two thirds of the way to Ethereum being sustainable, in the sense that if we didn't touch it for 10 years, we'd probably be fine. The last part that's missing is state growth.
So right now there's essentially no limit on the rate at which the state grows on Ethereum. There have been tons of proposals over the years to address this. The gist of it is that it's hard to do because Ethereum is already live; it would be very easy to fix this problem if we were starting from scratch on a new blockchain. But there are more and more concrete proposals that seem realistic, and there's a lot of value in having researchers with engineering skills actually prototype them, see what breaks, and see what the edge cases are. So yes, if I could direct people to one problem, it would be state growth. You'll be busy for the next three to five years.
Danny [01:26:15]: And this is more abstract, but the doors are wide open. Just show up: there are interesting problems, work on the problems, ask questions, fix typos on GitHub, and it spirals from there. The next day you're going to be talking to your crypto hero. I mean that honestly. If you're interested in this stuff, just jump in; there's effectively an infinite amount of work to do.
Sina [01:26:42]: Yes, it's unexpected. I'd imagine just going to the Ethereum Discord, and in the main channel there are, you know, six people going back and forth, and they're literally the people building the future of Ethereum. And you can just join the conversation right there.
Danny [01:27:03]: Yes. And again, like I said before: security and testing. If you have a skill set that you think is relevant to getting this stuff done, knock on my door; we can find something for you to do.
Sina [01:27:16]: Well, we can call it here. I think this was really great.
Danny [01:27:19]: Sweet. Yes. Thank you so much for having us.
Tim [01:27:22]: Yes, this is great.