Execution Layer Meeting 179 [2024-01-18]

Meeting Date/Time: Jan 18, 2024, 14:00-15:30 UTC

Meeting Duration: 99 Mins

Moderator: Tim Beiko

Meeting Notes: Meenakshi Singh


Summary

Dencun

  • Teams are satisfied with how Goerli went: issues have already been triaged, fixed, and new releases shipped. We are therefore keeping our Sepolia/Holesky schedule. Client teams should have a release out with activation set for both testnets by early next week. See EIP-7569 for epochs/timestamps. CL spec was just released, too (see announcements).
  • We tentatively agreed to move forward with this JSON RPC PR (ethereum/execution-apis#486) once (1) we've merged the 4844 naming changes (ethereum/EIPs#8095) and updated the PR to reflect those, and (2) ideally have @RyanSchneider (or another future user!) confirm that the JSON RPC PR looks good.
  • We're good to deploy the 4788 contract to mainnet. First one to do this and post a txn hash here gets their gas refunded.

Prague/Electra

  • Lots of conversation on the call, won't summarize everything here. If you care about what's going in the fork, make sure you watch the recording!
  • EL teams agreed that we should aim to do a fork in 2024, which likely means excluding Verkle and having it in the next fork. That said, there is likely bandwidth for one "harder thing" in that fork.
  • All EL teams agreed that 2537, 6110 and 7002 should be prioritized in the next fork. I've drafted a Meta EIP to reflect this, and given 2 of the 3 EIPs also involve the CL and we didn't get a formal +1 from them, I've put all 3 in the "CFI" bucket for now: ethereum/EIPs#8121
  • On the next ACDE, we'll discuss the 3 potential "big things": Verkle, EOF and 4444. If you have specific questions/points you'd like to cover about those, please raise them on either the call agenda (#943), here, or with the champions for those initiatives.

Dencun Updates, Goerli Fork

Tim Beiko 1:07: And we are live. Welcome everyone to ACDE 179. We have a lot of stuff today, hopefully we can get through it, but first we'll talk about Dencun: obviously go back over how Goerli went, and then there are a couple small items to discuss — some things around JSON-RPC, some things around 4788. Hsiao-Wei linked the new CL release PR, and then making sure we're still on the same page about timing for the next testnets. Once we have that done, there's a bunch of discussion around the next fork, Prague/Electra. There were a couple specific EIPs that people wanted to discuss, so we can cover those, and then I think it makes sense to hear from all the different teams what they think about the upgrade as a whole and what people's priorities are. And then lastly, on the next upgrade, the Verkle folks want to do kind of a deep dive on the next ACDE, and one thing that would be helpful is collecting feedback today on what people want to see and discuss so that they can best prepare for it. And then at the end, hopefully we have time for this, but there was a Besu mainnet incident that we'd like to discuss briefly. So I guess to get us started — Pari or Barnabas, do you want to start with a high-level Goerli recap? And then if any client teams want to chime in we can do that as well.

Paritosh 2:55: Yeah I can go. So we had the Goerli fork yesterday around 6:30 UTC, and immediately after the fork we saw a drop in participation — I think we went down all the way to about 40%. On triaging, we narrowed it down to one client being offline due to a failed upgrade and one client issue itself. The client issue was hotfixed soon, and I think someone from the Prysm team can talk more about that. Once the hotfix was applied the network did exceed 70% participation rates, and then we were seeing finality. During the non-finalizing period we did submit some blobs — they made it through perfectly fine — and post-finality we started blob spamming as well. That went on for about 7-8 hours, and based on that we've written an initial analysis that should showcase how the blobs fared in terms of propagation time and what effect they had on blocks themselves. Besides that, overnight we didn't run any other blob tests, but we started the spammer again this morning.

Tim Beiko 4:12: Awesome! Thanks. Yeah is anyone from Prysm on?

Terence 4:20: Yeah hello hi. I can give a brief update on what went wrong. So what happened was that in Prysm's upgrade from Capella to the Deneb state, we kept the historical roots empty instead of carrying it over. The issue was fairly easy to find; we basically patched it in the first few hours and then released a hotfix image, and I believe the updated devnet was finalized at about 2 am Pacific time, so about four hours after. Just to give a little background: historical roots is this legacy field, essentially frozen at Capella. It was used before Capella, so it's the old field, and what didn't get surfaced on the previous devnets or shadow forks is that we need to run long enough at the previous fork for it to be nonzero — it's only updated about every 24 hours. So say today you start from Bellatrix and you hard fork to Capella in less than 24 hours: this field is going to be zero. That's essentially what happened before, and that's why we didn't catch it. The spec tests also didn't catch it, so we immediately surfaced that to Hsiao-Wei, and thanks to Hsiao-Wei there's a PR so the spec tests will cover it. This is just one of those very rare edge cases. Thankful for Goerli — otherwise we probably wouldn't have caught it until later.
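
Note: for context, the fork-boundary carry-over Terence describes looks roughly like this in the style of the consensus-spec state upgrade functions — a simplified sketch with toy containers, not Prysm's actual code:

```python
from dataclasses import dataclass, field
from typing import List

# Toy stand-ins for the Capella/Deneb beacon state containers; the real
# states carry many more fields.
@dataclass
class CapellaState:
    historical_roots: List[bytes] = field(default_factory=list)

@dataclass
class DenebState:
    historical_roots: List[bytes] = field(default_factory=list)

def upgrade_to_deneb(pre: CapellaState) -> DenebState:
    # The fix: carry the legacy field (frozen since Capella) over instead
    # of leaving it at its empty default. The bug only surfaces when the
    # pre-fork chain ran long enough (roughly a day) for the field to be
    # non-empty -- which is why short-lived devnets never caught it.
    return DenebState(historical_roots=list(pre.historical_roots))
```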

Tim Beiko 6:00: Got it, thanks. And I guess this is also something we can catch not only in the spec tests but when we run devnets: we could explicitly have a fork happen far enough from the previous one to trigger this.

Danny 6:18: Yeah, or the genesis state can just have a value in that field, because mainnet always will have values in that field, and so genesis states on our devnets might as well too. But yeah, it was a failure — this should be caught way upstream, and it's just kind of the weirdness of the deprecation of that field and not thinking through the fork boundary tests all the way. We didn't have one in the consensus spec tests; that is there as of the release that'll go live right after this call. And we can spend some time making sure there's nothing else funky on fork boundary tests, although Goerli probably would have surfaced anything else like that.

Tim Beiko 7:04: Got it. I think this was the only major client issue that we saw on Goerli, but does any other client team want to share anything notable or interesting that they saw during the fork?

Terence 7:27: I had one issue that I surfaced, also minor: Prysm is seeing a client team — this is consensus-side, so apologies for bringing it up here — requesting blobs for every index no matter what. So say today you have a block that doesn't have any KZG commitments: we're getting requests for the blobs, and it's slightly wasteful. I don't want to point names, but I think it was Teku or something like that doing it. I think the Teku team is aware of that.

Enrico 8:09: Yeah, so we do in some cases request blobs, but it should not happen in that case specifically. I'm double-checking later today, but I think in the case where the block has zero commitments we should not do that. So I suspect it's some other peer or something running on the network that does that, but I'll double-check in any case, because we do something similar.

Tim Beiko 8:44: Awesome! Thanks, and I suspect this would be an easy fix once you identify it — to discriminate based on the commitments. Any other comments or thoughts from client teams?

Paritosh 8:59: One thing I did want to ask: right now we're artificially spamming Goerli with blobs, and everything's being submitted to the same set of nodes, so the propagation statistics will also look as if they've been submitted to the same set of nodes. We tried stopping for a bit — there's not much native blob traffic on Goerli. It would be really nice to know when L2s plan to start using it and when we should stop spamming it.

Danny 9:32: Do we expect L2s to use the Goerli blobs given the deprecation, or are they going to focus on another testnet?

Terence 9:44: I don't want to speak for Arbitrum, but I work fairly close to them. I don't think they will use Goerli; the most likely testnet for them will be Sepolia.

Tim Beiko 10:04: And if not L2s, I don't know if there's another group that we could get to spam Goerli and hit a different set of nodes. I guess, if you're listening to this and looking for something... Oh, Proto?

Protolambda 10:25: Yeah, so we are looking at testing on Goerli, starting with a layer 2 devnet. Once we get past internal devnet work, only then can we go to the more production testnets, and then towards the final stretch of mainnet testing. Now, spamming blobs — yes, we can generate some traffic on a devnet, but I would argue that's not real testing. We want to update the public layer 2 testnets, and that may take some time.

Tim Beiko 11:13: Do you have a feel for like how long it would take you?

Protolambda 11:21: So we consider our testnets as more production environments, where users and devs rely on the availability of these networks, and so the choice is to update the devnet first. This means that we aim for one or two weeks of testing on the devnet before we try any updates on the testnets.

Tim Beiko 11:51: Got it. So it's like at the earliest it would probably take another month for Goerli to be updated.

Protolambda 12:01: It's at least a few weeks away. The other timeline that's interesting here is how much time a layer 2 should take on the public testnets and how much the layer 1 accommodates. A stable upgrade target is important for layer 2 governance and for upgrades, so that you don't rush a proof change or a change that affects layer 2 security. The longer the rollout period of the layer 2 is, the more important it is to have sufficient layer 1 test time to prepare for mainnet.

Tim Beiko 13:04: And I guess, do we see a risk that 4844 is somehow inadequate for L2s? Obviously you can imagine it being on mainnet for months before an L2 chooses to use it, and that's probably fine. But what's the minimum amount of confirmation we want that this is, quote unquote, usable — and do we sort of already have that? Because I know there were some...

Protolambda 13:41: In terms of capacity I think we're very close. What I'm more concerned with is the tooling and infrastructure around blobs. Many RPC providers, for example, serve the JSON-RPC but not the beacon layer RPC, and a lot of the infrastructure that has been built around layer 2s relies on the JSON-RPC, not on this new HTTP API that serves the actual blob data. So there's more to update outside of the layer 1.
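
Note: the beacon-layer HTTP endpoint Proto refers to is the blob sidecar route added to the standard beacon APIs for Deneb; a minimal sketch of fetching blob data from it, assuming a locally reachable beacon node (the port varies by client):

```python
import requests

BEACON_API = "http://localhost:5052"  # assumption: local beacon node

def get_blob_sidecars(block_id: str = "head") -> list:
    # /eth/v1/beacon/blob_sidecars/{block_id} returns the blob sidecars
    # (blob, KZG commitment, KZG proof) attached to the given block.
    resp = requests.get(f"{BEACON_API}/eth/v1/beacon/blob_sidecars/{block_id}")
    resp.raise_for_status()
    return resp.json()["data"]

for sidecar in get_blob_sidecars("head"):
    print(sidecar["index"], sidecar["kzg_commitment"])
```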

Tim Beiko 14:19: Right, and that's something I would definitely not block L1 on — the ecosystem adopting it — because they can add support for it whenever. But I guess my question is more: is there something about the consensus change itself where we feel we'd need additional validation from L2s or some other stakeholder before moving to mainnet, or should we keep moving towards mainnet as quickly and efficiently as we can? It's better, I think, to be in a spot where it's on mainnet and people just have to adopt it, than in a spot where it's not live because we're waiting on some external dependency. But I want to be sure that's a reasonable assumption and there's not something on the L2 side we may be missing. Ansgar has a comment in the chat that there's a 4844 "L2 Readiness Roll Call" next Wednesday, so they'll bring that up there and can report back on Thursday's ACDC — we should probably discuss that further there. But for now, it seems we should get L2s to test this as quickly as possible: if Optimism can do it on Goerli in the next couple weeks once the devnet has been done, that's great; if Arbitrum is waiting on Sepolia, then we should get Sepolia upgraded; and if there are other L2s listening on this call, if someone wants to start testing on Goerli sooner, that's probably just valuable data for us to obtain.

Paritosh 16:25: Okay so my takeaway from that would be that we continue spamming but rather than a target of six we'll just have a target of three. So that if someone else wants to join in then they're not priced out.

Tim Beiko 16:37: Yeah that makes sense.

Paritosh 16:40: Okay, and just one more thing I do want to mention: these are alpha numbers, and we kind of want to know what client teams are looking for, so that we can start customizing this sort of analysis — say, every week, from the last week's data. Right now it looks like, in the average case, blobs are propagated on the network under the 2-second mark. There are some P95 values at the 4-5 second mark, but in general attestation deadlines seem to be hit. And it doesn't matter how many blobs we have — block propagation looks unaffected, which is good news. In general all the numbers look quite good so far. So the question is: what else do we want to collect here, is something missing? There's a question — Potuz's message — about whether builders are being used regularly on Goerli as well. We have MEV-Boost enabled in at least all of our validators. We haven't tried sending blobs directly through them, but they have been gossiped by our nodes; we can try sending them directly to the relay.

Tim Beiko 18:23: Oh, okay — and there's a comment saying Flashbots has delivered some payloads with blobs. Nice. Okay, any other thoughts from client teams or others about Goerli specifically?

Barnabas Busa 18:47: Very, very quickly after the non-finality was over.

Add eth_blobBaseFee; add blobs to eth_feeHistory execution-apis#486

Tim Beiko 18:50: Oh, nice! Anything else? Okay, so next up: we had this JSON-RPC discussion a few weeks ago on the testing call about adding blob gas prices to a few JSON-RPC methods. We were hoping to have more discussion on it, but it does seem pretty stale. Lightclient, you are the last person who commented — do you think we should go ahead and merge this, or does it need more input? This feels like something that would be nice to have before mainnet, and potentially before one of the next testnets, so that we can expose the blob base fee over JSON-RPC.

Lightclient 19:54: Yeah, for one thing, we had this PR to 4844 one and a half weeks ago trying to rename gas price to base fee with respect to blobs, and I think we probably want to move forward with that. If we do, that's going to change the name of the RPC method from blob gas price to blob base fee, so that needs to be updated. But in general this seems like something we should be doing. I'm just flying a little blind, because I'm not sure who the consumers of these things are, and they're not really stepping up to say "this is what we would like this to look like." So I'm kind of just following what has existed in the past and moving in the same direction with the blob fees. For instance — I don't remember if this was the testing call or the last call — Roberto was saying that for them, they just calculated the blob base fee using the previous header. So people are getting around it, and the question is: if people are getting around it, do we need to expose this RPC method? I'm not sure. It seems like the right thing to do, but I'm just waiting for someone to say "this is what we want."
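
Note: the workaround Roberto described works because the blob base fee is a pure function of the parent header's excess_blob_gas; a sketch following the EIP-4844 pseudocode, with the EIP's constants at the time:

```python
MIN_BASE_FEE_PER_BLOB_GAS = 1          # wei, per EIP-4844
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477  # per EIP-4844

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    # Approximates factor * e**(numerator / denominator) with integer math.
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(parent_excess_blob_gas: int) -> int:
    # Anyone with the previous header can compute this locally, which is
    # why clients can "get around" the missing RPC method.
    return fake_exponential(
        MIN_BASE_FEE_PER_BLOB_GAS,
        parent_excess_blob_gas,
        BLOB_BASE_FEE_UPDATE_FRACTION,
    )
```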

Tim Beiko 21:12: So assuming we go ahead and merge that PR with the name change on 4844, then we change those names. Maybe somebody to ask — I know Ryan from Infura has been in the Discord following a lot of these devnets, so Infura might be a good consumer of this, if anybody else has thoughts. It's PR 486 in the execution-apis repo. Is there anyone who — sorry, go ahead.

Lightclient 21:50: Oh sorry, I was just going to agree with you, and I wanted to know if anyone was opposed to 8095, the changing of the name from GasPrice to BaseFee — because I'll just merge that in the next hour if not. Great.

Tim Beiko 22:12: Okay, so let's merge the PR to 4844, and then maybe do one last ping on the Discord and explicitly tag Ryan about the JSON-RPC one, and assuming there's no objection we merge that one as soon as it's updated with the new names. Sweet. And then, okay, next up.

Deployment of 4788 contract on testnets & mainnet.

So there was a comment by Andrew about deploying the 4788 contract on all the other networks. We deployed it on Holesky and Sepolia this week, and we weren't sure if it makes sense to deploy the 4788 contract on mainnet now, or if we'd rather wait to see the other testnets fork before we do that. I don't know if anyone has thoughts about this?

Danny 23:22: Given the deploy address is a function of the code, I think it's fine to deploy now, and might as well. "Deploy now" meaning: in the event that there is some last-minute change to that code — due to a functionality change or a bug or something — it would deploy to a different address, so we don't shoot ourselves in the foot by already having something there. So given the manual deploy, off the top of my head I'd say just do it.
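
Note: the address is a function of the code because EIP-4788 uses a keyless ("Nick's method") deployment — a presigned transaction with a fixed signature, so the recovered sender, and hence the CREATE address, changes whenever the deployment code changes. A minimal sketch of the address derivation, assuming the eth_utils and rlp packages:

```python
from eth_utils import keccak
import rlp

def create_address(sender: bytes, nonce: int) -> bytes:
    # Standard CREATE address derivation: keccak256(rlp([sender, nonce]))[12:].
    # In a keyless deployment the sender itself is recovered from a fixed
    # signature over the deployment transaction, so editing the init code
    # changes the sender and therefore this address.
    return keccak(rlp.encode([sender, nonce]))[12:]
```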

Tim Beiko 23:26: Okay. Does anyone want to volunteer to deploy it? I will refund your gas cost if you do. The first person who goes ahead and deploys it can post the transaction hash in the All Core Devs channel. Yes, Mario. I'll send back the gas cost of the first transaction to whoever deploys it and posts it in All Core Devs — I will not double it, though.

New CL release: release v1.4.0-beta.6 consensus-specs#3578

Sweet, next up: Hsiao-Wei posted this in the chat, but there is a new CL release being cut. I don't know if Hsiao-Wei is on the call — otherwise, Danny, any thoughts or comments on that?

Danny 25:08: This was expected to go out Monday. It did not, and then the issue on Goerli came out, so we got in additional test vectors, and now it is ready to go out immediately after this call. It has the fork choice filter change, it has a number of other minor things which are not really substantive with respect to this, and it has the additional tests. The intention is that this is the target going forward, with the fork choice filter change being the main thing to accomplish — but also knowing that if you don't have that done on the next testnet, you won't diverge except in pretty crazy exceptional scenarios. This has been discussed quite a bit on calls, so I don't think there's any additional context or discussion needed.

Tim Beiko 26:10: Got it. Okay so I think those were all the sort of outstanding items around Dencun.

Sepolia & Holesky fork timing, see: EIP-7569

In terms of timing: on the last ACDE last year we'd set the schedule for all three testnets. Goerli is already done; Sepolia is currently scheduled for January 30th, and Holesky is scheduled a week after that, for February 7th. Our plan was to have a single client release for both those testnets, so that users can just download it once and be ready on both chains, and we only have to have one release cycle, one announcement, and so on. If we want to stick to that and give people at least a week to upgrade on Sepolia, it means we'd have to have client releases and the announcement out sometime around Tuesday next week. Does this feel realistic for all client teams? Does anyone have a problem with that schedule, or should we keep moving forward and announce Sepolia and Holesky around Tuesday of next week? Okay, Teku is saying it's okay, and no one is objecting, so I'll take that as a sign that we're good — and some plus-ones from Nethermind. Okay, last call... otherwise we'll move forward. And obviously, if we see something go terribly wrong on Sepolia, we can always cancel the Holesky fork, but I think this will simplify things a bit for users to have both. So great, let's stick to it. All the numbers are in EIP-7569: the epoch on Sepolia is 132608, the epoch on Holesky is 29696, and the corresponding timestamps are linked in that table. Anything else on Dencun at all? Okay, sweet.
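
Note: for anyone mapping those epochs to wall-clock time, the conversion is mechanical; a small sketch (the genesis time is per-network and shown as a placeholder):

```python
SLOTS_PER_EPOCH = 32   # mainnet-style preset, also used by Sepolia/Holesky
SECONDS_PER_SLOT = 12

def epoch_to_timestamp(epoch: int, genesis_time: int) -> int:
    # Activation timestamp = genesis time + epoch * slots/epoch * seconds/slot
    return genesis_time + epoch * SLOTS_PER_EPOCH * SECONDS_PER_SLOT

# e.g. for Sepolia: epoch_to_timestamp(132608, SEPOLIA_GENESIS_TIME),
# where SEPOLIA_GENESIS_TIME is that network's beacon chain genesis time.
```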

Prague/Electra Proposals

Tim Beiko 28:48: Then moving on to the next fork, Prague/Electra. I compiled all the proposals earlier this week, at least the ones that had been put up so far. Three people asked to talk about a few things specifically today, so I think it makes sense to go through those first, and then hear from all the client teams what they're thinking in terms of prioritization — both in terms of specific EIPs, but also this higher-level conversation around where Verkle fits in and how we potentially want to approach the next two forks rather than just the next EL fork. So, to kick it off — I don't know if Foobar is on the call? Yes, okay. I'll post your update in the chat, but take it away on 3074.

Foobar 29:50: Awesome! Does screen share work for presentations? Okay, let me get that going. Okay, perfect — great to meet you everyone, I'm Foobar, you may have seen me on Twitter; this is my first time touching the core dev stuff, so I appreciate you bearing with me. (Sorry, your presentation is stuck at loading. — All right, we got the links, perfect.) So the key high-level thing is that I think there's a massive opportunity for a core UX-level upgrade that both increases EVM dominance and relevance and saves literal hundreds of millions of dollars for users at the same time. I wrote a more in-depth article here that you can dive into, but the key idea is that batch transactions for EOAs add a whole swath of benefits. They fix stale-approval attacks, which have taken 100 million plus from retail, including three separate attacks in the last month. They reduce a lot of developer overhead — speaking from personal experience — of people having to embed quirky native multicalls within their protocols. It's also a bit of a free lunch on the state-growth side of things: currently you've got all this overhead for mempool propagation and signature verification for sequential transactions, when really you just want one signature across the whole thing, so you're able to get more actions and more chain usage without any additional client or node burden. And you're able to remove some of the assumptions we've got on trusted intermediaries like private relays and whatnot. Most of this is focused on the user and dev experience side, but I do have a bit at the end where we can dive into the specific EIPs, although I think there are people more skilled than me to discuss that. So, a quick example of stale approvals and how they go bad. I think the approval pattern in Ethereum is good for some things but has been overused, and a lot of people have gotten wrecked as a result. For something like a long-standing OpenSea ask — I want to sell this NFT for one ETH — that's a good usage of approvals. For something like just putting my asset into a protocol — say I want to swap Tether for ETH on Uniswap — it's not actually desirable on the app side or the user side to constantly be doing approvals that are left open and go stale, and a lot of apps have this dangerous infinite-approval pattern. We've seen this get exploited time and time again, just in the last 30 days. For example, you had the NFT Trader exploit: a two-year-old contract, several million dollars' worth of assets drained — not because users were actively participating in the exploit, just because they had done an action two years ago and forgot, or didn't have the bandwidth, to revoke. Same thing with Flooring Protocol. And then just yesterday or two days ago, you had a lot of people get their assets drained from stale approvals on a bridge. I picked recent, relevant ones, but these things happen over and over again, and they're frankly all fixed with a batch transaction pattern. So I think there's a lot of user leakage, and a lot of Ethereum-experience improvement that can happen from this. It's also huge on the protocol UX side: I know natively enshrining multicall is something that can be done, but most people don't know how to do it — in my work on audits you kind of have to manually push people to get there. I think coalescing around a single standard for the entire space makes for a much better devx.
Yeah, state growth, as we said: you've got fewer transactions floating around in the mempool for the same level of actions, and less overhead from sequential transactions that really do want to be next to each other. And something that's maybe not well understood without diving in is that you actually get better 1559-style gas-volatility spreading. If a user is trying to do multiple actions in a row — approve, swap, deposit — they can't leave their computer until that final transaction has gone out into the world, so they often have to overpay on the first ones: they overpay on approve, they overpay on swap, just so they can express their actual gas preference on that last bit. If you could batch three-ish transactions, send it to the mempool, and let it sit there for a couple hours, you get better queuing on that front and better state growth. And then of course there's the risk of intermediaries — private relays — unbundling things either maliciously or accidentally. I think this makes the relaying space a lot more open and permissionless, because there's less need to rely on reputation as collateral; enshrined batch transactions basically remove that. So my strong opinion is that batch transactions for EOAs, in some form, need to ship in the next hard fork: a massive upgrade on security, on UX, on state growth, on demand limiting, and even on obscure things like private-relay trust vectors. The two key EIPs seem to be 3074, from lightclient, who's been amazingly helpful in talking through a lot of this, and a slimmer, simpler, but less fleshed-out 5806. I think most people on the call are familiar with AUTH and AUTHCALL: you essentially provide an ECDSA signature to a trusted invoker, much like a 4337 singleton entry point, and then that invoker can execute code on your behalf as the EOA. A lot of great work has been done on this and on thinking through how to mitigate some of the security vectors by signing each specific call — I think that's an interesting approach. And then 5806 is very simple: no invokers, just a new transaction type that lets EOAs delegatecall any contract directly and execute that delegatecall code within their own context. Obviously these are mutually exclusive, but I think one of them needs to get shipped to upgrade the whole EOA experience. So those are my thoughts: from a user perspective a massive security upgrade, from an app-developer perspective a massive UX upgrade, and I think it adds some benefits to the core chain consensus mechanism as well.
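
Note: to make the mechanics concrete, here is a heavily simplified toy model of the AUTH/AUTHCALL batching flow described above — the names (`auth`, `invoker_execute`, `Call`) are illustrative stand-ins, not the EIP's actual opcodes, digest encoding, or gas semantics:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Call:
    target: Callable[[str, bytes], None]  # toy contract entry point
    data: bytes

def auth(signer: str, signature: str, commit: bytes) -> str:
    # Toy AUTH: "recover" the signer. The real opcode recovers an address
    # from an ECDSA signature over the chain id, nonce, invoker, and commit.
    assert signature == f"signed-by-{signer}"
    return signer

def invoker_execute(signer: str, signature: str, commit: bytes,
                    calls: List[Call]) -> None:
    authorized = auth(signer, signature, commit)
    for call in calls:
        # Toy AUTHCALL: each call runs with the EOA as msg.sender, so
        # approve + swap + deposit ride on a single user signature.
        call.target(authorized, call.data)
```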

Tim Beiko 38:59: Thanks for sharing. Any questions on the presentation or write-up from anyone on the call? Ansgar?

Ansgar 39:15: Yeah, first of all, of course, great presentation. From my point of view, looking specifically at those three potential EIP paths, I have two comments. First, I think those three are all relatively heavy-handed. The alternative proposal to 3074 has just been underexplored — I don't personally think it makes a lot of sense — and the other one, the 5806, while interesting, people have talked a lot about letting EOAs delegatecall directly, and it runs into similar issues. So out of those three, 3074 is probably the most realistic, but as we've talked about for the last two years already regarding that EIP, it's very heavy-handed, so I'm not sure it's a good fit for mainnet. If there is a strong need for batching, I would say the most realistic path would be a new EIP, or reusing one of the older ones, that is very limited in scope to just the batching use case specifically. So that's comment one: a narrow EIP here would make more sense. Comment two is that I personally, philosophically, think that for a lot of these more user-focused improvements to the EVM, we should more and more leave the initial experimentation with new features like this to layer 2s, which are more nimble and shipping anyway. And now that we're starting to see, with RollCall, a standardization process around EIPs — or EIP equivalents — on the layer 2 side, I would much rather see these kinds of things shipped on layer 2s first and later brought to mainnet. But those are my two comments.

Foobar 41:06: Yeah, that's helpful. To respond to the first: definitely, my desire is that some form of this gets shipped, so if there's a way to slim things down using the learnings of the past two years, that all sounds very interesting. On the second comment — push all the innovation out to the L2s — I think that's a good long-term plan, and it works quite well for things like new cryptography precompiles. But for things where there's a high coordination cost — good batch transactions require three actors to all coordinate: you need the core chain to support it, you need the wallets to support it, and you need the apps to be able to push it — I'm not sure the fragmented approach gets the job done. And in terms of how you spread the EVM and make this a good default experience, I think there's still room for real winning pushes from the core L1 itself.

Justin Florentine 42:18: Quick question, following on Ansgar's point though. You said that of the three actors you need to coordinate, one of them is the L1 — why wouldn't an L2 sequencing their own transactions serve as a substitute?

Foobar 42:32: So yeah, if you rely on a centralized sequencer then you get some of the benefits — unlikely to unbundle, for example — but the user still has to sign multiple sequential transactions, and so on. You kind of train people to click, click, click instead of carefully inspecting things, and you've just replaced the role of the trusted private relay bundler with a trusted centralized sequencer. It's not super robust.

Tim Beiko 43:16: William?

William Morriss 43:20: Yes, I just wanted to comment that I don't think layer 2s should be treated like a feature testnet. The developers making layer 2s care a lot about compatibility with mainnet, and if they deployed a feature that had a tentative specification, then when it finally comes to mainnet — even if it has the same opcode number — there might be differences in behavior, and those would be technical debt for them. So they usually aren't especially interested in doing that role for us. That's all.

Tim Beiko 43:59: Right. Okay, so I think we can probably discuss on the L2 call whether or not we'd want this in its current form, or potentially a tweak of it, to be deployed on L2s versus L1. And I'm just reading Georgios's comment — is there a clear next step that people would want to propose here?

Foobar 44:47: I mean, this is an outsider's perspective, but it seems maybe some of the confusion comes from two separate questions being jumbled: the first question being "are EOA batch transactions desirable," and the second being "what's the right implementation for that." You've got a lot of fragmented views on the second bit, but less clarity on the first one, so maybe that's the key thing to answer first.

Tim Beiko 45:24: Yeah, and I don't know if anyone has an answer right now off the top of their head. Oh, Ahmad?

Ahmad Bitar 45:37: Yeah, so the other solution that Foobar talked about seems to be harder to secure than the current implementation of 3074, as you can't really parse a delegatecall easily when it comes inside the wallet, whereas with 3074 you can parse the calldata, the address, the amount of gas being used, the ether you're going to pay, etc. So it's easier to reason about, and easier for a wallet to implement something for the user to actually look at when they're signing this transaction. Now, on the L2 perspective, my personal opinion is that I agree with William: L2s are not going to introduce this EIP unless there is some kind of commitment that it will eventually land on mainnet as specced, without any changes. That's just my personal opinion.

Tim Beiko 47:01: Thanks. Andrew?

Andrew 47:04: So there was a concern from Erigon team members about the original 3074, that the authorization is not revocable. So we are opposed to that version of 3074, but the other versions, where the permissions are revocable, or EIP-5806, seem more reasonable.

Lightclient 47:39: To be clear, 3074 is now revocable as of about one month ago: we added the nonce to the authorization message, so when that nonce is passed, the authorization is no longer eligible for AUTHCALL.

Andrew 47:55: Oh I see.

Danno Ferrin 47:56: I would call that expirable, not revocable, because you set an expiration. Revocable is: I change my mind and nothing else changes, but it's revoked.

Lightclient 48:04: We don't have to nitpick the exact definition here.

Danno Ferrin 48:09: I would not call it revocable however.

Lightclient 48:11: Okay I disagree.
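
Note: as context for this exchange, the nonce binding lightclient describes works roughly like this (a simplified sketch, not the EIP's exact message encoding):

```python
def auth_is_valid(account_nonce: int, signed_nonce: int) -> bool:
    # The signed authorization commits to the signer's nonce at signing
    # time; once the account's nonce moves past it, AUTH stops validating.
    return account_nonce == signed_nonce

# "Revoking" an outstanding authorization is then just bumping the nonce:
# send any transaction from the EOA and auth_is_valid(...) turns False.
```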

Tim Beiko 48:15: And Ben?

Ben Adams 48:18: In answer to Foobar's first question, on whether batch transactions for users are desirable: I'd say definitely, because not having them means we have some pretty weird UX patterns that generally end up being quite unsafe. And as he mentioned, if you've got to do three or four transactions just to do one action, you're sort of training users to blindly go yep, yep, yep in the wallet — and then they go to a phishing site, they've got multiple transactions coming through, they go yep, yep, and they fall into this bad pattern.

Foobar 49:09: And also, I think it's important to note that in the absence of batch transactions, apps have developed even more unsafe approaches. I'd say the status quo today is something like a Seaport order, which isn't even a transaction — it's an EIP-712 typed, arbitrarily nested array that gets people wiped fifty assets at a time, and it's not even simulatable. You've got the same issues with things like Permit2: a single malicious Permit2 signature can take all the assets you've approved in Uniswap. So what enshrining batch transactions in the protocol does is not introduce new security risks — it moves them from these opaque, non-simulatable 712 messages into actual transactions that wallets can simulate. Batch transactions are already here, in an unsafe way; the question is whether you can improve them and enshrine them, so that we're not stuck in a Blur/Permit2 world forever.

Ben Adams 50:26: Yeah, to follow up on that: the way you do an off-chain signature, which was sort of an intermediary solution, is a very dangerous pattern. It's off-chain, it lives there forever, and somebody can just insert your signature at a later time of their choosing — which has caused a bunch of hacks as well.

Tim Beiko 50:51: Just to be mindful of time: I think this definitely provides context to a lot of the L1 teams, and also shows that maybe some of them aren't up to speed on the latest designs for this stuff — obviously there are the three EIPs that people can review. In terms of next steps: one, L1 teams can figure out how they feel about 3074 or one of the other alternatives, and whether we want to move forward with something like that in the next fork. Two, there's a discussion to be had on the L2 side about whether or not we'd want to implement 3074, or some version of it, there first — Ansgar has a comment around how they're trying to figure out the right framework to do that. I guess if people have questions or concerns, is the eth-magicians thread about 3074 still the best place for that as people review it?

Lightclient 52:09: Yeah, that's a good place. Also one of the channels on Discord — any of the transaction-batching UX channels — just feel free to tag me and I'll respond.

Tim Beiko 52:23: Got it. Oh, I'm not saying we're necessarily looking for extra feedback, Georgios, but if some L1 teams review this more deeply in the next week or so and have questions, then we can use both the channel on the R&D Discord and eth-magicians as the places to discuss those. I think it probably makes sense to move on to the next ones at this point, though. Thanks again, Foobar, for coming on and sharing both the write-up and the slides.

Foobar 53:09: Yeah, thank you.

Lightclient 53:14: I just want to say one last thing on this: there is a lot of demand from the user space for something to improve the UX on L1. So if 3074 isn't the path forward, it would be great for client teams to think a bit about this over the next couple weeks, so we can try to come up with a proposal that we can put into the next fork.

SETCODE alternative designs

Tim Beiko 53:37: Yeah, that makes sense. Okay, next up: SETCODE. We had another design overview — William? Okay.

William Morriss 53:54: Yeah, can everyone hear me, can everyone see the presentation? All right, cool. I hope this goes fast. So, the common basis for all the designs of SETCODE: we're just modifying the code at the address that's executing. CODECOPY and CODESIZE still refer to the executing code; EXTCODECOPY/EXTCODESIZE/EXTCODEHASH refer to the stored code — it's similar to delegatecall in that regard. New call scopes will execute the stored code, so the set effectively happens immediately, and set-codes can be reverted, like SSTORE. It fails inside of a create context, and it doesn't clear storage. Those are the basic parameters of SETCODE, but there are a few alternative designs that have come up in the process of designing it. The most basic SETCODE would have no restrictions — a simple design that works the same way as SSTORE, which also has only the staticcall restriction. It would be difficult, however, to identify immutable code, and someone identified that it could have an issue with ERC-4337: if a library that all of the validation steps used by the alt mempool were to be toggled, it could thrash that mempool and be a DoS vector. So with his help I developed a newer design that adds a restriction: you can only modify the currently executing code if it's your own code. This allows it to be easy to prove that a contract is immutable — it'd be easy for a wallet to know if it is approving a mutable contract, and easy for Etherscan to flag mutable code. Also, immutability can't be faked during execution: you wouldn't be able to pretend during the scope of a transaction that you're immutable, get validated for some registry, and then change to mutable — that's a nice property too. And it would prevent SETCODE inside a delegatecall, which addresses the alt-mempool DoS vector. So this may be a more secure design, and it's probably a good restriction that could later be removed if that was determined to be safe — but probably a good way to launch. The last parameter for the design of SETCODE is that, instead of setting the code at the account, we could have the code be an immutable blob per account and give accounts a new field, the code address, that lets them indirect to the blob that is their code — with opcodes to set and read the code address. That's another way to implement it, and I'm interested in knowing how good you think it is, and how compatible the code-address spec is with Verkle, because when you do Verkle trees you're likely going to cement this field in, and so either the code address would be mutable or the code would be mutable — either would be satisfactory for my purposes. The current specification is the "belongs to" specification, where you can only change your own code. It's a nice spec because it implicitly captures the delegatecall and create scenarios, and it's pretty easy to know whether you're allowed to SETCODE; the other designs were slightly more complicated. Again, the reason we need this is that account abstraction can't DELEGATECALL if it wants to be competitive for DEX protocols, because of priority gas auctions: if you have a proxy behind a delegatecall and an SLOAD, you're not going to be able to compete for Uniswap v4 — you're going to be super late to any trade, and you're going to have a systematic disadvantage.
And so if you're trading on DEXes, things like 4337 aren't really viable. That's why MEV bots are using the CREATE2-and-SELFDESTRUCT pattern for their upgrades: it's a really good user experience, and I would like to preserve that user experience so that I don't have to move all of my assets to a new account every time I want to support a new callback. That's all for this presentation; I defer back to you.
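
Note: a rough, illustrative model of the restricted SETCODE semantics William outlines — all names here are hypothetical, and the real rules live in the draft proposal:

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    address: str          # account whose stored code would be modified
    code_address: str     # account whose code is actually executing
    is_static: bool = False
    in_create: bool = False
    journal: list = field(default_factory=list)  # revertible, like SSTORE

def op_setcode(frame: Frame, new_code: bytes) -> None:
    if frame.is_static or frame.in_create:
        raise ValueError("SETCODE disallowed in static or create context")
    if frame.code_address != frame.address:
        # Inside a DELEGATECALL this is not "your" code, so the restricted
        # design rejects the write -- which also closes the 4337
        # alt-mempool DoS vector mentioned above.
        raise ValueError("may only modify the executing account's own code")
    # The executing code (CODECOPY/CODESIZE) is unchanged for this frame;
    # the stored code (EXTCODE*) updates immediately and unwinds on revert.
    frame.journal.append(("setcode", frame.address, new_code))
```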

Tim Beiko 59:29: Thanks, William. Any questions? Marius, I see you have a comment in the chat around breaking gas estimation. William, maybe you want to answer that first, and then if there are other questions people can raise their hands.

William 59:45: Gas estimation is already broken by SSTORE for the same reason: you can branch according to code the same way you can branch according to storage, so any change at all impacts gas estimation. And this is one of the main uses of alt mempools and relays such as Flashbots.

Tim Beiko 1:00:13: Got it. Any other questions or comments? Oh, Guillaume?

Guillaume 1:00:22: Yeah, so there was a question asked about the feasibility in Verkle of the third option. I see no reason why this would not work. However, on a personal note, I don't really see how this whole SETCODE story could end well — there are so many potential hacks with this. I would not consider it in the Verkle tree spec for sure, but I don't think we should consider it, period.

Tim Beiko 1:00:56: Got it. And okay, there's a specific question from lightclient: if we can do this with delegatecall already, why do we need SETCODE?

William 1:01:14: Yes. So delegatecall is unviable if you're trading on DEXes, because of priority gas auctions. It affects your ability to trade on a DEX if you're competing with others: if you're trying to trade and only one of those trades can win, the one that uses the least gas is going to have a huge advantage, because they're able to bid higher and get the trade.

Tim Beiko 1:01:56: Got it. Okay, and there's one last question.

William 1:02:04: Marius has a question: "so this is only about the gas usage of delegatecall?" No, I don't think that's all. Having used the self-destruct upgrade pattern — even when you have to wait a whole transaction — I see it as super high priority; it was so good. If you don't need to use storage you've been using this already, and dozens of power users have been upgrading this way. It just saves so much gas per transaction. I don't think there's a way to price delegatecall that would fix this; not having a delegatecall at all is just a huge advantage.

Marius 1:02:59: So I think this proposal will backfire quite badly for most users, because people will create gotcha contracts where they're able to set the code and then break it. So I think it's a negative for most people, with maybe an advantage for the handful of power users that use this. I don't think we should do this for those people and potentially put normal users at risk.

William 1:03:42: I don't see it as that demographic — rather, only a handful will be able to keep trading on DEXes if we don't allow everyone to do it this way. Currently there's a huge cost to migrating all of your stuff, and you're only able to do that if you're a high-frequency trader; otherwise you're just going to suffer, or go back to using the one-by-one transaction method described. I do agree with some of the sympathies in the comments that we just need better account abstraction — that is true.

EL/CL EIPs: EIP-7251, EIP-7547

Tim Beiko 1:04:18: Okay, this is probably a good place to wrap this one up. Again, if this is something client teams want to see in the fork, they can definitely signal for it. Thanks a lot, William, for coming on. Okay, last one before we get into what client teams want in the fork: Mike wanted to talk about two CL EIPs that would have implications on the EL side, so that people are aware of them — first the value they would bring, and second what they would mean in terms of work on the EL side if they were to happen. EIPs 7251 and 7547 — Mike, do you want to give an overview?

Mikeneuder 1:05:14: Yeah! Hey, thanks Tim. I'll try to be super quick — I know there's still some stuff left on the agenda. I'll give a quick overview of the two EIPs, talk about how they relate to the execution layer, and then if there's time I'm happy to field a few questions; otherwise please reach out offline, I should be pretty easy to connect with. Both EIPs I just sent in the chat. The first one is EIP-7251, which we've been talking about for a while: the idea of increasing the max effective balance. Super tl;dr: the current max effective balance is 32 ETH, which means both that the minimum balance to become a validator is 32 ETH and that the maximum balance a validator can have is 32 ETH. This has a few implications. On the big-staking-pool side of things, it means they essentially have to spin up many, many validators to account for all the stake they want to put into the consensus layer — for example, Coinbase has something like 120,000 or more validators currently controlled by that single entity, so they're almost redundant in that regard. On the small-staker side, it means any stake you have above 32 ETH automatically gets withdrawn and doesn't earn compounding rewards until you have enough to deploy another validator — a whole new 32 ETH. So generally speaking, increasing the maximum you can have, while keeping the minimum at 32, benefits the large stakers because it allows them to consolidate — and this has overall network health implications — and it also helps the small stakers, insofar as they can have a balance above 32 ETH but below 64 ETH and get access to this automatic compounding. That's where I'll leave the summary; there's a lot more written on this, including a related-work section, in the gist. In terms of the execution layer, the way this relates is through EIP-7002, which asks that the execution layer withdrawal credential — which is different from the signing key used for the consensus layer — can initiate an exit. Beyond this, one important feature of 7251 would be to allow custom withdrawals, not necessarily full exits, also triggered from the execution layer. The use case here: if you have a validator with 100 ETH and you want to withdraw 2 ETH to pay your taxes or whatever, you can do so without withdrawing the whole 100 ETH validator, and that withdrawal request can come from the withdrawal credential rather than from the validator signing key. Cool — I'll run quickly through 7547. The tl;dr is that this EIP aims to enshrine inclusion lists in the protocol. The motivation can be seen on Tony's awesome censorship dashboard, which basically shows that, because of the world we live in with MEV-Boost and block building, a huge number of blocks on Ethereum mainnet are built by a very small number of entities, and those entities usually choose to exclude some set of transactions. The EIP itself is a forced-inclusion mechanism: validators, even if they aren't building the block for their slot, can ensure that some set of transactions will get included. There's extensive research on this, mostly done by Vitalik and Francesco over the past three years.
The latest design is this "no free lunch" one — it has a related-work section with all the details you could want. The implication on the execution layer side is that block verification would now depend on the inclusion list: the block would only be valid if it included the transactions specified by the inclusion list; otherwise it should be considered invalid. So that's my high-level pitch. Again, these are mostly on the consensus layer side, but I wanted to make sure this audience was aware of them and that any questions people have get a venue and a way to get in touch with me. Thanks for giving me the time, Tim. Happy to take some questions now, or if we should move on to the client team discussion, happy to do that as well.
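
Note: the EL-side validity rule Mike describes for EIP-7547-style inclusion lists amounts to roughly the following check (a simplified sketch; the actual proposal also covers exemptions, e.g. transactions that can no longer be included):

```python
def block_satisfies_inclusion_list(block_tx_hashes: set,
                                   inclusion_list: list) -> bool:
    # A block validates only if every listed transaction is present.
    return all(tx_hash in block_tx_hashes for tx_hash in inclusion_list)

# During block verification the EL would treat any block for which this
# returns False as invalid.
```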

Tim Beiko 1:09:50: We can take a couple of questions now if there's time. Any questions? Ahmad?

Ahmad 1:10:02: So, one question about increasing the max effective balance: how does increasing it affect the inactivity leak in case there is non-finalization in the network? Since the increase in the balance of the validator will mean it can attest — or not attest — for longer and still affect the network to a higher degree.

Mikeneuder 1:10:36: Yeah so the leak would still be proportional to the balance that you have. So if you're like a 32 eth validator you leak at some percentage per year. And if you're a 2048 eth validator you leak at that same rate.

Ahmad 1:10:53: That's right. I just remembered the equation — never mind, what you're saying is actually valid. Thank you.
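
Note: the proportionality Mike describes follows from the Altair-style inactivity penalty, which scales linearly with effective balance; a sketch using the consensus-spec constants of the time:

```python
INACTIVITY_SCORE_BIAS = 4
INACTIVITY_PENALTY_QUOTIENT_BELLATRIX = 2**24

def inactivity_penalty_per_epoch(effective_balance_gwei: int,
                                 inactivity_score: int) -> int:
    # The penalty grows linearly in effective balance, so a 2048 ETH
    # validator leaks at the same *rate* (fraction of stake) as a
    # 32 ETH validator.
    return (effective_balance_gwei * inactivity_score) // (
        INACTIVITY_SCORE_BIAS * INACTIVITY_PENALTY_QUOTIENT_BELLATRIX
    )
```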

Tim Beiko 1:11:04: Cool, thanks. Any other questions?

Mikeneuder 1:11:13: I'm just looking at lightclient's questions: how complete is the spec since I last looked, and "this will only be so effective because of stake consolidation" — how does stake consolidation affect this?

Lightclient 1:11:26: I just mean, if you're Coinbase and you're building four blocks in a row, you're only going to use non-empty ILs whenever it's not you building the next block — if for some reason you're not allowed to include transactions in your blocks that are considered something that should be censored.

Mikeneuder 1:11:47: Right. I guess this is a little speculative, because we're not sure exactly what the validator-side decision process would be, or the builder-side decision process, so I don't want to speculate on what they would do. In general, Tony wrote a piece — I don't have it off the top of my head — called cumulative, non-expiring inclusion lists, which kind of addresses this: it makes it so you can specify how far in the future the list can be enforced. I'm not sure about the necessity of that versus the complexity it adds, but it's definitely part of the discussion.

Lightclient 1:12:30: Yeah, that makes sense. I think this is something we should be looking at pretty closely — if not for this fork, then hopefully for a fork soon. That's generally my thoughts.

Mikeneuder 1:12:48: Yeah, and we're in this slightly delicate situation where a number of the large builders have started censoring a subset of transactions, and there are a few builders that are not. If those builders decide to start censoring, it could quickly turn into both a bad look — mathematically, if 90+% of blocks are censored — and a legitimate degradation in UX in terms of getting those transactions in. So right now the situation is very unstable, and the more I hear from builders and validators, the more I think getting some enshrined inclusion mechanism feels very important, especially as regulatory stuff continues to be uncertain.

Lightclient 1:13:39: Yeah it would be better to be proactive with this than reactive in the event of censorship like seriously impacting users.

Mikeneuder 1:13:50: Absolutely, yeah.

Tim Beiko 1:13:55: Okay, I think we can probably wrap up here unless there are any urgent questions. Thank you, Mike, for sharing all that. So the next big chunk — and this might consume the rest of the call — is, given all this context, what we've discussed on the previous ACDE, and the general set of proposals, it'd be good to hear from client teams what they think should be prioritized in the next fork and where they have the strongest opinions. As part of that, probably the biggest question is how we feel about Verkle. We'll focus more of the next call on going deep into Verkle, but from a high-level perspective: do we want to do Verkle right now? Do we potentially want to do another fork with another set of EIPs prior to Verkle? And if so, which EIPs are most urgent to include there? That seems like the biggest question to answer. So, if any team wants to kick it off — Georgios just shared a write-up. Georgios, do you want to give a quick overview of that, and then others can chime in?

Georgios 1:15:23: Yeah, thank you Tim. I shared a write-up in the chat. The tl;dr, I think, is that momentum is very important and we have a good vibe right now of shipping things, so the Reth team's view is that we should be able to ship something in 2024, and I think we should optimize for that. To that extent, I think the most important EIPs to ship are the staking-related ones Mike mentioned, in particular 7002. Beyond that, I think we should be shooting for isolated, one-person jobs on the EVM side. EOF — I know it's a painful topic for some — we took a close look, and we think it's doable by one person in a couple of months. Other ideas we're interested in are increasing the max blobs post-Cancun, but we need to do a lot of data work before we can assess that, so we don't really want to push for anything like that yet. So our strawman would be 7002; there was another EIP about introducing deposits into the EL state, which is good for the CL; and the BLS curve EIP — I think everybody knows how to use BLS12-381 by now, it's literally everywhere, and I don't see any argument against it. Let's discuss scope and commit to something. And given that we have it written down, I think it's worth giving time for other teams to speak; I'm happy to discuss.

Tim Beiko 1:17:18: Thank you, that's a really good starting point. Does any other team have thoughts or a perspective they want to share?

Andrew 1:17:29: Yeah, I can talk about Erigon's perspective. I think that we should definitely deliver big, serious, fundamental improvements that require a lot of engineering, because I believe that Ethereum has the strongest engineering talent pool among all blockchains, and if we don't do difficult things then maybe nobody will. So I think we should deliver Verkle and EOF. The main question is how we schedule them: whether Verkle first and then EOF, or the other way around. I think Guillaume has a point that Verkle is already an epic upgrade and EOF can potentially complicate it further, so perhaps it makes sense to deliver Verkle first and then EOF. And once we agree on that order, we can maybe bolt some relatively isolated things onto the first upgrade. I think tackling batch transactions is important, so say we decide that we are happy with 5806 — then we can potentially bolt 5806 onto the first upgrade, say Verkle.

Tim Beiko 1:19:09: Got it, thank you. Any other team?

Marek 1:19:15: So Nethermind, we would like to see a small fork with EIPs such as execution layer exits, the BLS precompile, and supplying validator deposits. After the small fork we would like to focus on Verkle trees. Additionally, we would slightly prefer EOF for that fork if all teams have the capacity to do it, and we would also be happy to implement an account abstraction EIP if there's a clear decision on which EIPs we want to implement for account abstraction.

Tim Beiko 1:20:00: Got it. Thank you.

Marius 1:20:04: I cannot really speak for the team, but what I would like to see: I would love to see Verkle first, but I don't think that realistically the majority is for Verkle first. So I would also go with the Nethermind approach of doing a smaller hard fork first while working on Verkle. And the smaller hard fork, in my opinion, should contain 7002, 6110, 2537, and another one I didn't remember at first: so basically the EL validator deposits and exits, the BLS precompile, oh, and secp256r1 as well. I think that would be a small set for a small fork, and we would have time for building Verkle.

Guillaume 1:21:26: Just to add to this, there was actually some pushback on 6110, but everybody else on the team that I'm aware of, at least, is for a small fork before Verkle.

Tim Beiko 1:21:41: Got it.

Matt Nelson 1:21:44: Yeah, the Besu team here has very similar thoughts to Marius and the Nethermind team: focusing first on a smaller fork with a lot of the same EIPs, like the BLS precompile, the secp256r1 precompile, 6110, 7002, focusing on some EL-related deposit stuff. And then between a bigger driver like EOF or, as Nethermind stated, something like 4444s, it's kind of a toss-up; I would defer to others on this. I know some of the Besu maintainers have strong opinions about EOF, so I'm speaking mostly for a subset of maintainers here.

Danno Ferrin 1:22:31: Yeah, I'll speak for EOF. Sorry for not being on last week's call, I was on vacation, so I guess the voice wasn't loud enough there. As far as EOF and all these other EL deposit-level changes, it's important to note that the engineering commitment is fairly orthogonal: the engineers that know the most about EOF on the client teams are not necessarily the ones that know the most about the consensus layer changes. So the concern that we're going to be splitting engineering effort, I don't think, quite holds, because, as Georgios pointed out, it's very little engineer time; we already had this mostly functioning with Nethermind, Besu and Geth in January of last year, and these are mostly iterative changes. So the hard work's been done for EOF. That's my opinion on EOF; as for the other changes proposed, I'm in favor of the fork before Verkle, and the other scoping sounds reasonable to me.

Tim Beiko 1:23:30: I think that was all the EL teams but did I miss anyone? Oh Lukasz?

Lukasz 1:23:37: So I would add quickly: our concerns for EOF are mostly on the testing side, because it's very easy to introduce a consensus issue there. So I would really love to hear from the Hive testing team what they think about EOF.

Mario Vega 1:24:06: Hey, yes, I think we have some pretty good ideas on how to do EOF and also Verkle, which is to introduce more testing fixture formats, for EOF specifically. Now that we have execution-spec-tests, we can generate a definition of how EOF can fail, for example, and do this testing across all the teams by creating a fixture format specifically for this. I think that can alleviate a lot of the testing burden for EOF, and also for Verkle; we're planning to do the same thing for these complex upgrades on the EL side.
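
To make the idea concrete, here is a hedged sketch of what such a test might look like in the general style of ethereum/execution-spec-tests. The helper names (StateTestFiller, Transaction, Account, Environment) come from that repo's Python tooling; the dedicated EOF fixture format Mario describes did not exist yet, so the specific test shape below is an assumption.

```python
# Hypothetical sketch: a state test asserting that deploying a malformed EOF
# container fails. The dedicated EOF fixture format discussed on the call was
# still being designed; this follows the general execution-spec-tests pattern.
from ethereum_test_tools import Account, Environment, StateTestFiller, Transaction

SENDER = "0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b"  # well-known test account

def test_truncated_eof_initcode(state_test: StateTestFiller):
    env = Environment()
    pre = {SENDER: Account(balance=10**18, nonce=0)}
    tx = Transaction(
        to=None,                       # contract creation
        data=bytes.fromhex("ef0001"),  # EOF magic + version, then nothing
        gas_limit=1_000_000,
    )
    post = {}  # creation is expected to fail: no new account in the post-state
    state_test(env=env, pre=pre, post=post, tx=tx)
```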

Tim Beiko 1:24:54: Yeah, and there are a couple of comments on EOF in the chat around a finalized spec. So I don't know if anyone who's familiar with the latest spec work can give an update.

Danno Ferrin 1:25:10: So the spec as it's written today could be closed. We're mostly arguing over smaller things: maybe one more opcode here, how we're going to do the final nibble encoding of the EXCHANGE opcode. There is a push to move everything to variable width, but I think that's come a little too late in the process, because that would basically be restarting the spec and redoing large things. So what we have written in the EOF repository, modulo stuff found in testing, could ship as EOF. We would want to have a final discussion with the Solidity and Vyper engineers who are on the call, but it doesn't seem like it's a very long list at all, so it could be done in a week or three.
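
For reference, the container format in the EOF repository Danno mentions (the EIP-3540 family) starts every contract with a two-byte magic and a version byte, followed by typed section headers. The sketch below only checks that fixed prefix; section kinds and layout were still under discussion at the time of the call, so treat it as illustrative rather than a spec.

```python
# Simplified sketch of the draft EOF container prefix (EIP-3540 family):
# 0xEF00 magic, then a version byte, then typed section headers ending in
# a terminator. Only the magic and version are checked here.
EOF_MAGIC = b"\xef\x00"

def is_eof_container(code: bytes) -> bool:
    """True if the code claims to be an EOF v1 container (prefix check only)."""
    return len(code) >= 3 and code[:2] == EOF_MAGIC and code[2] == 0x01

assert is_eof_container(bytes.fromhex("ef0001"))          # EOF v1 prefix
assert not is_eof_container(bytes.fromhex("6001600155"))  # legacy bytecode
```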

Tim Beiko 1:25:55: Got it. Guillaume?

Guillaume 1:25:58: Yeah, so just maybe a reminder: the problem in deciding which one goes first is really about the complexity of Verkle, and the fact that the more we wait, the longer the transition is going to take. So if you start adding EOF before Verkle, first it's going to complicate Verkle, because now you have two types of contracts to handle, and with Verkle the code itself ends up in the tree and is going to be treated differently, right? So you're going to increase the complexity of Verkle unnecessarily, and on top of that you're going to make the conversion much slower and much more risky, because it will be pushed back by as much time as you need to deliver EOF, which is not a simple change. That being said, in Verkle we have to go through the state and convert everything. So, and Paweł is supposed to get back to us on that, if it were possible to convert all the legacy code during the Verkle conversion, then I would say let's do EOF first, because then we could turn everything into EOF and forget about legacy code. It's not guaranteed that it's possible, in fact it's very unlikely, but if it is possible then it's totally worth waiting. Otherwise there's no good reason to do EOF first, because, like I said, the state is growing and we have a very difficult problem ahead of us. We need to tackle it first. EOF is really nice, but it's a nice-to-have; it's not a catastrophic problem that we're going to face sooner or later.
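
Some background on why code "ends up in the tree": under the draft Verkle design (EIP-6800), contract bytecode is split into 31-byte chunks stored as individual tree leaves, each prefixed with a byte recording how many of its leading bytes are PUSH data spilling over from the previous chunk. The sketch below follows that draft scheme; the details could still change, and this is not any client's implementation.

```python
# Hedged sketch of the draft EIP-6800 code chunking: each 31-byte slice of
# bytecode becomes a 32-byte leaf whose first byte counts the leading bytes
# that are PUSH data continued from the previous chunk.
def chunkify(code: bytes) -> list[bytes]:
    chunks = []
    carry = 0  # PUSH-data bytes spilling into the current chunk
    for i in range(0, len(code), 31):
        chunk = code[i:i + 31]
        chunks.append(bytes([min(carry, 31)]) + chunk.ljust(31, b"\x00"))
        j, carry = carry, max(carry - len(chunk), 0)
        while j < len(chunk):  # walk opcodes to find the spill into the next chunk
            op = chunk[j]
            size = op - 0x5F if 0x60 <= op <= 0x7F else 0  # PUSH1..PUSH32 data
            carry = max(j + 1 + size - len(chunk), 0)
            j += 1 + size
    return chunks

# 30 JUMPDESTs, then PUSH32 with 32 data bytes: the data spans two more chunks.
leaves = chunkify(b"\x5b" * 30 + b"\x7f" + bytes(32))
assert [leaf[0] for leaf in leaves] == [0, 31, 1]  # spill counts per chunk
```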

Danno Ferrin 1:27:51: So one specific problem that EOF solves with its structure and validation is the whole notion of uncapping code size. There's a lot of push on Twitter to increase code size beyond 24K, and that cannot safely be done in legacy code, whereas it can be done safely in EOF code. Another point is that some of the goals of EOF are to explicitly break things that legacy does that we don't want in the future, such as code visibility: we don't want to be able to look at the executing code as part of the execution, or modify and manipulate it. We want to separate that code inspection out, and we're also taking the opportunity to remove some of the gas observability issues present in prior versions of legacy code. So one of the points of EOF is to break these things that have caused problems and get rid of them, so that in the EOF world they can't be done. Because of that, you can't translate all legacy code into EOF code; you could probably do most of it, but there are some explicit places where contracts depend on these features, and those we just can't translate. Now, the longer we wait to put EOF in, the longer we have to deal with more legacy code coming in in fractured forms, such as the diamond pattern, versus looking at safer ways to uncap code size. These app developers are begging for it, just not as loudly as the DeFi people for things like getting rid of revocations, but there's a lot of desire to increase the 24K limit, and we need a better validation system before we can do that safely.
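
As a small illustration of the self-inspection EOF removes: legacy bytecode can copy its own code into memory at runtime and branch on it, which is one of the behaviors with no direct EOF translation. The hand-assembled snippet below is illustrative only, not taken from any real contract.

```python
# Illustrative legacy bytecode that inspects its own code at runtime via
# CODECOPY. EOF drops this style of self-inspection from the instruction set,
# which is part of why such contracts cannot be mechanically translated.
legacy = bytes.fromhex(
    "6007"  # PUSH1 0x07  length to copy
    "6000"  # PUSH1 0x00  offset within this contract's own code
    "6000"  # PUSH1 0x00  destination offset in memory
    "39"    # CODECOPY    copy the contract's *own* bytecode into memory
    # ...subsequent opcodes could now read and act on the copied code...
)
```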

Tim Beiko 1:29:18: Okay, so we're basically at time. So maybe to summarize a couple of things we've heard so far. One, it seems like everyone except potentially Erigon would want to see a small fork before Verkle, and Erigon's view is that if we do a fork before Verkle it should include something substantial, not just a bunch of small things; we should do hard engineering work. In that context, the three EIPs that everybody seemed on the same page about were 6110 and 7002, the two validator-related EIPs, and then the BLS precompile. Whether we move forward with Verkle, EOF, or potentially something like 4444s, even though that's not quite a consensus change, as the quote-unquote hard engineering thing we do in parallel to the small EIPs, seems to have a bit less consensus. So would it make sense to move forward with those three EIPs, 6110, 7002 and the BLS precompile, as the basis for our small fork targeted for sometime next year, and on the next call spend most of the time discussing the big three proposals, EOF, Verkle and 4444s, and figure out which fork they would fall into? So: agree that there will likely be a small fork before Verkle, with the minimum set we have consensus on so far being 6110, 7002 and 2537, and then discuss the three big things on the next call and see how they fit in. Does that make sense to people? Any objections to this?

Andrew 1:31:42: Well, I have one question. I think there is some opposition to EOF in principle. So maybe we can decide on the next call, but I would like to hear whether people agree that EOF is a good idea in principle or not.

Tim Beiko 1:32:09: Right, yeah, it's probably worth getting into that on the next call. And in the case where we did not want to move forward with EOF but wanted a separate fork before Verkle, the other thing we could spend extra engineering resources on is 4444s, history expiry, which doesn't actually require a hard fork, but does require hard work by client teams. Does that seem reasonable? And maybe the thing I'm trying to say is that everybody seems in favor of the three small EIPs; there are still some questions around what the big ones are, but at least we can set those three and then discuss the big ones more thoroughly on the next call. Oh, Potuz? Okay, we're already a couple of minutes over time, so I'll open a draft PR for that, and we can continue the discussion on the next call. I think we didn't quite have time to get to the point about Verkle, but one thing that would be valuable is if people could share any questions or concerns they'd like to discuss around EOF, Verkle or 4444, either on All Core Devs or on the agenda for the next call, so that the folks championing those can take them into account. And I know we also didn't quite get to the Besu incident. Matt, would you be okay if we moved that to the start of the next call, so that we're sure we actually cover it? Cool, okay, let's do that. Any final closing thoughts or comments before we wrap up?

Potuz 1:34:26: Can I just say something quickly? Two unrelated comments. One: my only objection to your statement was shipping next year, the reason being that many teams may have based their decision to have a small fork on the idea that this fork will actually be small and will not delay Verkle. If we knew that we would be shipping in 2025, then perhaps teams would have decided differently. So I would commit to targeting at least the end of this year. And the second comment is a quick shout-out to the Reth team. I think they set a precedent that I hadn't seen before, except perhaps once from the Geth team, which is to put out, before the meeting, detailed reasoning for all the EIPs they want included, the ones they don't want included, and the ones to be discussed. This was too late for us to do; we're planning to reach that internal consensus in Prysm and have it ready for the next meeting. It would be very good if everyone did the same next time, so this is just a quick shout-out to them.

Tim Beiko 1:35:38: Yeah, agreed, that's extremely valuable. And I didn't mention the CL call, but I assume you'll all have a similar discussion next week on your side of things. I think this is a good place to end. So I'll put the PR up for those three small EIPs, we can discuss the three big ones on the next call, and talk to you all soon. Thanks everyone. Bye.

Attendees

  • Georgios Konstantopoulos
  • Pooja Ranjan
  • Tim Beiko
  • Foobar
  • Anders Holmbjerg
  • Naman Garg
  • Ben Adams
  • Marius van der Wijden
  • Guillaume
  • Justin Traglia
  • Terence
  • Danno Ferrin
  • Paritosh
  • Ignacio
  • Barnabas Busa
  • Joshua Rudolf
  • Matt Nelson
  • Kolby Moroz Liebl
  • Protolambda
  • Alto
  • Ben Edgington
  • Enrico Del Fante
  • Maintainer.eth
  • Ansgar Dietrichs
  • Roman
  • Pawan
  • Dhananjay
  • Danny
  • James He
  • Trent
  • Lightclient
  • Vasiliy Shapovalov
  • Ameziane Hamlat
  • Lukasz Rozmej
  • Kaesy
  • Mikhail Kalinin
  • Spencer -tb
  • Nick Gheorghita
  • Carl Beekhuizen
  • Hsiao-Wei Wang
  • Raneet
  • Dankrad Feist
  • Stokes
  • Mario Vega
  • Echo
  • Gajinder
  • Andrew Ashikhmin
  • Karim T.
  • Marek

Next Meeting Date/Time: Feb 1, 2024, 14:00-15:30 UTC