S. No. | Agenda | Summary |
---|---|---|
142.1 | Pectra-devnet-3 | It has been launched and is generally going well. A “bad block” fuzzer was deployed, which surfaced some bugs; the relevant teams are debugging. |
142.2 | Pectra Fork Scope | Discussions on managing the scope of the Pectra fork led to two key decisions: (1) focus the next hard fork on the EIPs currently deployed to pectra-devnet-3; (2) postpone decisions on the scope of the fork after Pectra. |
142.3 | Timeline | The timeline from devnet-3 to mainnet is a few months, moving towards a ‘spec freeze’ for Pectra. |
142.4 | Future Forks | Less consensus on the scope of the fork after Pectra, with candidates like EOF and PeerDAS, and uncertainty around features like Verkle and EIP-7688. |
142.5 | Open PRs | Reviewed PRs polishing devnet-3 features, focusing on mitigating potential DoS issues and attestation refactoring. |
142.6 | PeerDAS and Blob Parameters | Ongoing work on PeerDAS devnets and discussion of raising blob parameters, with consensus leaning towards raising the blob target in Pectra, pending further analysis. |
Alex Stokes 1:16: Okay, we should be live. I assume the chat on YouTube will let us know if there are any issues, but let's go ahead and get started. To kick us off we have the agenda here; this is issue 1154 in the PM repo. This is Consensus Layer call 142. Thank you everyone for joining, and there's quite a bit to get through today, so let's dive in. To get started, let's think about Pectra, and perhaps we can quickly go through any updates for devnet-3. Devnet-3 has launched, which is super cool. We had some bad block testing which caused a little turbulence, and it'd be interesting to hear the results of that. Does anyone have anything they'd like to share?
Paritosh 2:11: Yeah we had Marius's bad block generator and mainly the chaos seems to be limited to Besu. Is someone from the Besu team around? Maybe give an update.
Daniel Lehrner (Besu) 2:23: I can give an update, yes. The fuzzing yesterday created signatures that were zero, hexadecimal 0x00. Besu has an internal rule that the signature has to be at least one, so we rejected any transactions with those signatures. I'm currently updating this and have opened a PR to fix the issue and accept those signatures. Once that's fixed, I think we should be able to rejoin devnet-3.
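For context, a minimal sketch of the kind of bounds check described, based only on the description in the call; the names and exact rule are illustrative, not Besu's actual code:

```python
# Illustrative only: the call describes Besu rejecting transactions whose
# signature values were zero, while other clients accepted the block.
SECP256K1_N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def signature_values_acceptable(r: int, s: int) -> bool:
    # Old rule (per the description): require r >= 1 and s >= 1, which
    # rejected the fuzzer's 0x00 signatures and forked Besu off devnet-3.
    # The described fix relaxes the lower bound so clients agree on validity.
    return 0 <= r < SECP256K1_N and 0 <= s < SECP256K1_N
```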
Starting point: Pectra scheduling into two forks
Alex Stokes 3:12: Cool. Any other devnet-3 issues? Otherwise, I was looking the other day and it looked pretty healthy, so that was exciting to see. Okay, cool. Then we will move to the next item on the agenda, and yes, this is discussing the split of Pectra into possibly two forks. We discussed this on last week's ACDE and asked everyone to come with a view on their preference for how to split this into two, for the reasons that were given there. So I guess we'll just go ahead and open the floor. Yeah, Etan put this in the chat, thank you. I had a document here that was essentially one way to split up what we have now. I guess where to start is that Pectra, as it currently stands, is one of the largest forks we have ever scheduled in Ethereum's history. So for many reasons around reducing risk and the overall success of actually executing a hard fork, it makes a lot of sense to split it into two. Also, given the development progress of other features we've been discussing for Pectra, like PeerDAS or EOF, there's a pretty natural split between what's on devnet-3 today and the other things we've been discussing. So maybe to get us started with the conversation: does anyone disagree with what I wrote in the note? Is there a different proposal anyone would like to discuss? Are there EIPs you would want to see in this first Pectra that aren't in the note? Happy to hear what people think. Yeah, Andrew.
Andrew Ashikhmin 5:19: Right, so we have Erigon's perspective on the Pectra split. I'm posting the write-up; it was written by our team member Somnath. Somnath may be on the call; if not, I can briefly summarize our position. We are basically against the split, and there are a few reasons. On the face of it, it makes sense, but essentially we worry that splitting Pectra into two will lead to two fully blown forks, and people will use it as an opportunity to include more EIPs in the second fork. There might be interactions between various EIPs of the two forks, and that will introduce a lot of interference and last-minute decisions. We also want to get our priorities straight: if we think that Verkle is important, then we should prioritize Verkle rather than other random EIPs. And if we implement many EIPs, it might be successful in some sense, but it detracts engineering resources from working on other improvements to the clients, like tackling state growth, code efficiency, and so on. So trying to cram as many EIPs as possible into, say, two forks may not exactly be a success story. We propose to keep EOF in Pectra, given that there is a lot of momentum on EOF. If PeerDAS is not ready, we can have a CL-only fork, or if PeerDAS requires only some trivial changes on the EL side, that should be fine, because it won't hinder our progress on Verkle much. But yeah, I agree with Somnath, and I would rather avoid a Pectra 2.
Alex Stokes 8:27: Yeah, I think that's the key risk here: we say okay, Pectra is big, we'll split it into two, and then we essentially don't have the discipline to keep the second part as is. Then it becomes a whole separate fork, with the delay that necessarily comes with that, and that's not great. The way I would mitigate that risk is that we commit, even today, to breaking it into two with essentially no room for new features. If there's some EIP that complements something in Pectra A or Pectra B, then sure, that would make sense, but the point is not to just push everything to a second fork, or push some things to a second fork and then open the can of worms of hard fork scheduling again in, say, six months. Any other thoughts? Yeah, Etan?
Etan (Nimbus) 9:49: Personally I'm also in favor of the split, just because I'm looking at it from a PeerDAS perspective, and at this stage it seems like it got descoped so much that it's essentially subnet DAS but with the DAS part also removed. So it's just splitting the data among nodes and hoping that availability sort of works out without any sampling. I mean, not fully removed, but it appears to me that it's not as ready as all the other features that have already been implemented, like the attestation changes, validator requests, MaxEB. I don't think that delaying all of that, which is beneficial for users today, is warranted just to wait until both PeerDAS and EOF are fully ready and better tested.
Alex Stokes 11:00: Yeah, I think that's the justification for the two forks. And just to surface several comments in the chat, like Ansgar's here, the last message: essentially, this will be most successful if we commit to a very strict scoping. That really means there's no room for anything new; it's not that we pick Pectra and then later decide what goes in the next fork. It's acknowledging that we've already decided we have two hard forks' worth of EIPs, and this is the split. Yeah, Potuz?
Potuz 11:37: Yeah, so I don't agree with being strict on the scope, and even if I did agree, I don't trust that we would be strict on this. We set out to scope Pectra very early, a few months before we went to Kenya, and we were very clear that we wanted to have a very small fork, because it wasn't meant to delay Verkle. That was the commitment, that was the agreement all teams went into: we did it async and proposed a list of changes. Prysm didn't even favor MaxEB at first. And then we have this monster fork that now we're trying to split. If we had known we were going to have this large a fork, we would have pushed for a very different set of EIPs. So I find it not just unfair, I find it simply wrong to already be committed for the next two forks, because this fork, for all practical matters, is already two forks.
Alex Stokes 12:47: Saulius?
Saulius 12:50: Yes, I think my thinking is kind of in line with Potuz's. We can just think about it slightly differently: we can say, look, we have these features from the list that are quite ready, and there are a couple of features that don't look like they will be ready very soon, so we delay those and ship them in the next fork. For myself, the idea that it's possible to split Pectra into two pieces and that the second piece will be exactly as we envision it now seems a bit unrealistic. So if we just change the framing and simply say there are a couple of features that need to be pushed to the next hard fork, and that's all, that would be my idea.
Alex Stokes 14:04: Right, and just to chime in with what Marius is saying here in the chat, there are at least two things here. One is: should we focus on Pectra being essentially the devnet-3 set of features, and work over the next couple of months to get everything polished and start thinking about testnet timing on the way to mainnet. The other is the scope of, say, the second fork. I hear everyone that it could be tricky to resist putting new things in. I would lean towards keeping the scope very small, just because that's going to maximize our chances of actually shipping a second fork very quickly after this first one, say Pectra A. That all being said, let's take these separately. Is anyone opposed to saying that the next hard fork, call it Pectra A or even just Pectra, will essentially be the devnet-3 set of EIPs? I think there's general agreement on that point, but correct me if I'm wrong.
Marius 15:07: I'm very much in favour of that. I think Erigon was the only team voicing the opinion that we shouldn't split the fork.
Alex Stokes 15:25: Gotcha! Etan?
Etan (Nimbus) 15:28: I mean, generally yes. It's just EIP-7688, which I think should be in the same fork that adds these validator requests, because of the reindexing on the CL side.
Alex Stokes 15:52: Right, so again, I think it's going to really complicate things to think about adding more things to Pectra, to Pectra A, but we can have the conversation. Does anyone else feel strongly? In particular, the different client teams, I would love to hear your thoughts on the 7688 changes. And Etan, maybe just to add a little context: this is the stable container set of features, but essentially with a leaner scope. Sean?
Sean 16:26: So my perspective is that a big benefit of splitting the forks, a reason to do it, is that we have something that's finalized, so we could do it quickly. I would be against adding stable containers to that, especially because there wasn't a huge push for the feature from outside the core dev calls, I guess. So I think it makes sense to just go with devnet-3 and not add things. It might actually ship quickly. That'd be sweet.
Alex Stokes 17:04: Yeah I tend to agree myself. Andrew?
Andrew 17:08: Yeah, I think it makes sense to split CL changes like PeerDAS into a separate fork. That might entail some minor EL changes, I'm not sure, but if PeerDAS does entail some trivial changes on the EL side, that shouldn't be that big a problem. Why? Basically, I worry about hindering the progress on Verkle. On the EL side, at least, I think we need some headspace to concentrate on Verkle. That's why I propose to keep EOF in Pectra, so that we can ship it, but I'm happy to have PeerDAS and CL changes in a split fork.
Alex Stokes 18:04: Okay my understanding is that EOF is not quite to the same sort of developmental milestone as the Devnet 3 EIPs. Anyone disagree with that take?
Danno Ferrin 18:19: We have patches out for Geth to pass all the execution spec tests, Besu and Reth are ready to implement, and Nethermind isn't far away. So we're not too far from being able to execute a devnet with at least three clients, but we aren't 100% across all the clients yet.
Alex Stokes 18:40: Okay, thanks. So essentially, I understand the urgency to ship EOF, but given all the other considerations, I just think it would make more sense to keep it out of this first Pectra. Okay, so what I'm hearing is that we're generally on board with devnet-3 essentially moving on to be the next hard fork. Obviously there are other things to consider, but I think that's the direction we should move in. I don't know if we want to turn to scoping the second fork; it might be a little premature.
Matt Nelson 19:32: Well, I think the counterpoint to saying it's premature to scope the second fork was what you mentioned earlier: if we don't do it now, without a commitment to that fork we risk scope creep, the same exact problem we had with this first Pectra.
Alex Stokes 19:47: Right. Guillaume?
Guillaume 19:51: Yeah, I think we shouldn't work on the scope of the second fork just yet, precisely because, as Matt just said, there's going to be scope creep, but more importantly, there are going to be things that change. We're just going to add bikeshedding that is not necessary. What we need to do now is ship that fork.
Alex Stokes 20:15: Right, and to address Matt's point that was just made, the way I would think about it is that I would try very hard to stick to Pectra B being just PeerDAS and the blob set of EIPs, along with EOF. That being said, it'll be a conversation, and going from the chat and things said here, it sounds like it will be hard to commit to just that for the second fork. But I think we should all keep in mind that the more we touch or change the second fork, the longer it will take to ship. Hopefully we have it sooner than nine months, Marius. Okay, so I guess this is the final call: let's move ahead with Pectra being Pectra A, which is essentially devnet-3. Are we all clear with that?
ethDreamer (Mark) 21:19: Well, just to clarify, we have a number of pending modifications to those EIPs. So we don't really mean devnet-3, we mean devnet-4, or 5 or whatever, but with those EIPs, right?
Alex Stokes 21:30: Yeah, this is a good point. There are a number of open points still, even with the devnet-3 scope; some of them we'll get to later in the call. But I essentially mean that EIP set, that feature set.
Etan (Nimbus) 21:44: So where do you want to put the stable container stuff? Into Pectra 2, or which one?
Alex Stokes 21:52: Well I mean I think it's a conversation with everyone here. I think yeah pectra 2 is a natural Target.
Arnetheduck 22:01: My understanding is that it's as good as ready in most clients, and it actually has user interest as well; contract developers have problems with this today. So pushing it back seems kind of pointless. It's also CL-only.
Alex Stokes 22:25: Right, so the question for me is: even if it sounds ready, and it's maybe a relatively small code change, adding it still moves us further away from getting to a stable Pectra with a spec freeze and all that. That's not the end of the world, but I think it's a decision we should make intentionally. Are there other client teams, besides Nimbus, who feel strongly about this? I believe it's 7688, the stable container EIP.
Sean 23:07: Oh sorry. I feel like even though most clients, maybe all clients at this point, have implementations, there's still a degree of testing: we have to update the spec tests specific to it, and, just because it's not as battle-tested, we'd have to fuzz it. So it does add more work to what we have, even if we're mostly dev-complete. So in my opinion we should put it in the second Pectra fork, but yeah, that's my opinion.
Alex Stokes 23:44: Thank you Guillaume?
Guillaume 23:46: Yeah, so I did implement the containers in my own library like two months ago. It looks cool, but I just wonder if it's worth shipping before we actually need it. And I, for one, haven't understood why we need this, except for Verkle actually. Apart from that, is there an actual need right now to ship this?
Etan (Nimbus) 24:12: It's the 4788 smart contracts and client applications. Rocket Pool, for example, has a decentralized staking pool, and they want to be able to prove, without any multisig, that a validator was activated. They want to prove, relative to the beacon state root, that the validator is now there, that it's not slashed, and access that information from the smart contract. The problem today is that when unrelated stuff gets added, such as consolidation requests or whatever, the shape of the beacon state structure changes, and they have to keep updating their code every single fork, even if only stuff changes that doesn't affect them. It's like installing a new OS update and having to recompile all your apps every time, even if none of the APIs you actually use changed. So that's what's being addressed here. It's a benefit for the users who want to do verification in a smart contract relative to the beacon state. For us core developers, we don't benefit that much; we can optimize a couple of things by having fewer hash computations, if it's an optimized implementation, but the main benefit is for those developers, and indirectly also the users of the ecosystem, because it's easier to create trust-minimized applications when you have lower maintenance effort. But I agree it's not the most important feature. The only reason the timing is good in Pectra is because this reindexing of the beacon state is already happening for other reasons. So all those teams that rely on EIP-4788 have to update their code and redeploy their contracts based on new indices anyway, and that's why the timing is tied to those request EIPs. We can obviously also ship it separately, but it means that users will have to go through the migration step yet another time. Yes, the deposits and the withdrawals are more important, of course, but I don't think they are mutually exclusive; the stable container is a CL-only feature.
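To illustrate the reindexing point, here is a small sketch of why SSZ proof paths move when fields are added to a container; the field counts below are hypothetical examples, not exact BeaconState figures:

```python
def generalized_index(field_index: int, field_count: int) -> int:
    # In SSZ, a container's fields are Merkleized into a tree whose width is
    # the field count rounded up to a power of two; a leaf's generalized
    # index is that width plus the field's position.
    width = 1
    while width < field_count:
        width *= 2
    return width + field_index

# Hypothetical example: a state with 28 fields grows to 37 after a fork.
print(generalized_index(11, 28))  # 43: the proof path before the fork
print(generalized_index(11, 37))  # 75: same field, new path; verifiers such
                                  # as the 4788 contracts must be redeployed

# EIP-7688's StableContainer fixes the Merkle width up front, so adding
# fields in later forks leaves existing generalized indices unchanged.
```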
Alex Stokes 27:07: Right, so then the question becomes: for this particular set of users, say of 4788, do we potentially delay Pectra even more to account for them? I don't know. Maybe to re-summarize things a bit, I'll go with what Tim is saying here: the whole idea of the split is to ship the next hard fork as quickly as we safely can, and I think what that means is we really should stick to a very strict scope. It's already going to be tricky to get to a spec freeze of the first Pectra in the next month or two, or the next couple of weeks, ASAP, just as is. And I hear that the code for the EIP is generally ready or mostly ready, but every little thing will push out timelines further and further. So I hear all of that, and I would lean towards deferring it to a later fork. Okay, there are a number of different views in the chat; it seems like we're kind of split on this. Do we keep things as is, or do we want to think about including this SSZ EIP in the first Pectra? Okay, there's my chat going; sorry for the silence, I'm trying to catch up. Maybe to ask this question: if we do commit to Pectra A being devnet-3 plus polish, is that the end of the world? I understand that there will be a little more downstream code change for users, say, of these generalized indices. It's tricky, but I really do think the right decision here is to stick with a tighter scope and just say, yes, this is a consequence of the decision, but ultimately we'll get to a faster Pectra A and keep momentum generally higher for the protocol broadly. Yeah, Ahmad?
Ahmad 30:31: I'm just going to say what I said in the chat, which is: either we split whatever is CFI'd and slated for inclusion in the current Pectra and not add anything extra to Pectra B, or we're going to end up basically scoping a whole other fork after this Pectra fork. Everyone is going to want to add EIPs to that Pectra B fork, and we might end up with a Pectra C, and it goes on. I don't think that's the right way to go. Either we are strict, or, if we're not, we stick with whatever we have, which is one fork.
Alex Stokes 31:26: Right, and that's the thing: I think everyone agrees that the current fork is just too big, so a split makes sense. And from there, if we do split, the question is what makes the most sense, from a very broad perspective, for keeping everything moving and our shipping velocity high. Adding more things, especially to Pectra A, is going to be quite risky. Mark?
Mark 31:55: Yeah, so there's a difference here between people who think we can limit the scope and people who think we won't be able to, and then there's a group who believe it's worth splitting up only if we can limit the scope. I guess the main question as to whether we can do this is: are there enough people willing to hold the line in nine months, or however many months, when people are talking about it? If you're on a team that wants it split up and wants us to limit the scope, can you commit, in these calls, to not letting anyone add anything?
Tim Beiko 32:53: So there are two different things, or more than two different things, but basically: one is, do we want to ship something relatively quickly? I think the only path there is that we split Pectra along the lines of what's already been implemented. There's a bunch of stuff that different people want to add for different good reasons, but as soon as we open that can of worms it becomes pretty much intractable. So I think devnet-3 is the right Schelling point if we want to split today. The other question is, if we do a Pectra B, it's clear that PeerDAS should be the main thing there on the CL side. On the EL side there's some disagreement about the relative priority of EOF and Verkle, and that's probably the biggest deciding factor. And then the other thing for Pectra B is: do we want a bunch of small things to come in? And sorry, Potuz, by devnet-3 I mean the EIPs in devnet-3, not the actual spec as is; I know there's a bunch of changes on all or most of the EIPs. Assuming we split the fork, on the CL side it does feel like PeerDAS is the obvious next thing to work on; on the EL side there's a question between Verkle and EOF; and then there's a question on both sides of whether we want a bunch of other small things to come into the fork or not. It doesn't seem crazy to try and resolve those things separately. One is to say: do we want to ship something as soon as possible? Two is to say: assuming we split it, and we're obviously removing EOF from this fork, do we want EOF to be the focus of Pectra B? There are some comments in the chat right now strongly supporting that. And three is: assuming we have PeerDAS and EOF in Pectra B, are there other small changes, not yet proposed, that we think are actually quite valuable to include, and how do they affect development timelines? The thing that becomes hard there is drawing the line between what counts as a small change and what doesn't. But they're effectively three sequential decisions.
Alex Stokes 35:37: Yeah, and going by that framework, it does seem like the SSZ features are important in isolation, but they fall into this bucket of things that are not quite ready, if we want to split off Pectra A today, if only because they haven't been part of core Pectra devnets yet. There'll be tooling updates, testing, security review; many things will need to be part of that process before we can say this is in Pectra A. It's hard for me to see how we could accomplish the best version of all of this if we include it in this first fork. So to that point, I would propose we go ahead and say again: Pectra A is devnet-3 plus the polish it needs. We can have the conversation about putting the SSZ EIPs into the next fork, and probably handle that scoping conversation a different day. Guillaume?
Guillaume 36:42: Yeah, I just had an idea. If you want to keep some flexibility for Pectra B while still making sure it doesn't blow up, you could introduce a rule, for example, that if you add any EIP you have to remove two from what is already scheduled. This way, presumably, only the better EIPs that everybody can get behind are going to be scheduled, and I think it will make people think twice about pushing new stuff.
Alex Stokes 37:18: Yeah I think that's going to be really hard to do in practice. But it's an interesting way to think about it. Andrew?
Andrew 37:29: Yeah, I'm just thinking that I don't see why we should join PeerDAS with, say, Pectra B on the EL side. I'm thinking they can be developed quite independently, because PeerDAS requires minimal, if any, EL changes. On the EL side, assuming we do this Pectra split, we'll have to discuss whether we do Verkle first or EOF first, because there is a dependency. But other than that, say we decide to do Verkle before EOF: then I would have two sets of testnets, one for EOF and one for Verkle, and they can progress quite independently. Verkle might require some minimal CL changes, and the same goes for PeerDAS, and we'll do those, but the bulk of the effort will be quite independent. And then we basically deliver whichever is ready first.
Alex Stokes 38:47: All right, okay, so how did we start talking about dropping EOF? I don't know if we necessarily did.
Andrew 39:01: Well, my point is that we should have a discussion; honestly, we'll have to discuss on the EL side whether we do Verkle first or EOF first. I was just saying, assuming we prioritize Verkle over EOF.
Alex Stokes 39:22: Right, and I would just echo what's in the chat here: I think we can just see which is ready first. From my somewhat limited perspective, right now it seems like EOF is much further along than Verkle.
Guillaume 39:36: I wouldn't be so sure.
Alex Stokes 39:37: No? Okay, so before this explodes and we spend the whole call talking about a future fork before we've even decided the first one, I would like us to make a decision on Pectra today. And I would strongly suggest we have the understanding that there are features from current development that would make a lot of sense in the next fork. But I think we can have that conversation, especially in the context of an EL call, even next week or sometime soon. Dragan?
Dragonrakita 40:17: I think we're again making this kind of decision on the call, a decision where a lot of teams need to give their opinion. I think these are best done in written form, so we can compare the stances of all teams. For example, on this call we started talking about Verkle versus EOF, while the main point of the split was to speed up Prague. We jump from one topic to another depending on the various teams' incentives; that's to be expected, but the main point gets lost in it. I would ask teams, as was done last week, to put their stance on this in writing. Then we can compare them, and it will be a lot easier to see where the various teams stand and where we're basically going. That's it for me.
Alex Stokes 41:22: Yeah thanks. That makes sense. Mark we'll take your comment and then I think we'll try to move ahead.
Mark 41:31: Yeah, it does seem, like Tim pointed out, that there are multiple decisions that go into this one decision. I like what he said, that we need the teams to get together and do things in writing. So could we commit today to a timeline for deciding the Pectra B scope? Not the scope itself, but just that by some time we will decide it, and it won't be changed after that point; until then we all get together, put out our arguments, debate, and then we'll have a call and decide.
Alex Stokes 42:17: Yeah, I think it's tricky to set a deadline given everything. If we want to try to have a view on Pectra B in the next two weeks, that would be great. I think this is top of mind for everyone, so it will naturally happen pretty soon, but it's hard to just declare a deadline and assume it will be met given all the uncertainties.
Mark 42:49: Maybe stop.
Alex Stokes 42:53: Well, yeah, I could just say two weeks from now and then we'll see what happens. I just think there are a lot of uncertainties going into Pectra B, and from the conversation on this call it sounds like it's a little more complicated than just splitting current Pectra into A and B, which is where my hesitation is coming from. But generally, yes, the faster we can make a decision the better. There are other things to get to on the agenda today, so let's recenter. There seems to be agreement to split current Pectra somehow, and I think there's very strong support to have this devnet-3-plus-polish idea be the Pectra fork; downstream we can figure out what comes next. Can we all agree to this? We have a thumbs up.
Guillaume 43:58: Yes.
Alex Stokes 43:59: There were other thumbs up in the chat to this idea earlier. Okay, let's go ahead and call it. Thank you everyone. I guess the takeaway for everyone else is: be thinking about Pectra B and expect to have this conversation in the next few weeks. Okay, let me look at the agenda. There are some blob things to discuss, and we might jump around a little bit, but I think the right way to move forward now is to turn towards the Pectra A scope. There are a number of PRs still in flight for some of the Pectra A features, so let's touch base on a few of them. This first one, Mikhail had this PR; let me grab a link to it here. He and I were talking about this, and one thing I'm trying to do, even for the Pectra A scope, is simplify things as much as we can. There are still some open questions around where we can simplify, what makes sense, and how to do it. One thing that came out of a conversation with Mikhail is a way to change how we handle the move to consolidations for a validator. The way it's currently specced, you can essentially top up as a validator, and if you have a credentials change in your deposit message, that signals to the chain to move you to this new compounding validator type. This touches a bunch of other things around weak subjectivity, deposit handling, and the deposit flow, and it ended up being a little tricky. One simplification is to change how we handle this: rather than have top-ups and the deposit flow be the infrastructure to surface that signal from a validator, we move it somewhere else, and in particular the suggestion was to move it to the consolidation pipeline. One way to think about this: if I have a consolidation coming from the EL where the source and target validator are the same one, meaning a self-consolidation, we piggyback on that as the signal to switch to this compounding state. That was a lot. Is Mikhail on the call? I don't know if you have anything else to add; this one's kind of hairy, but I wanted to bring it up because it does change how we're thinking about this particular feature.
Mikhail 46:57: Yup! Thanks, Alex. That's pretty much it. It also entails an update to the next PR on the agenda, the 6110 PR, which has been updated to remove the switch to compounding from the deposit pipeline. So I think we should wait for the one we're talking about to get merged, and then merge the 6110 one. That's a good time for it.
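A toy sketch of the mechanism being described, in the consensus-spec style; the types and field names are illustrative, and the details are still being settled in the PR under discussion:

```python
from dataclasses import dataclass, field

COMPOUNDING_WITHDRAWAL_PREFIX = 0x02  # prefix byte used by EIP-7251

@dataclass
class Validator:
    pubkey: bytes
    withdrawal_credentials: bytearray

@dataclass
class State:
    validators: list
    pending_consolidations: list = field(default_factory=list)

def process_consolidation_request(state: State, source_pubkey: bytes, target_pubkey: bytes) -> None:
    # Look up both validators referenced by the EL-triggered request.
    source = next(i for i, v in enumerate(state.validators) if v.pubkey == source_pubkey)
    target = next(i for i, v in enumerate(state.validators) if v.pubkey == target_pubkey)
    if source == target:
        # "Self-consolidation": source == target is piggybacked on as the
        # signal to switch this validator to compounding (0x02) credentials,
        # instead of signaling the switch through a deposit top-up.
        state.validators[source].withdrawal_credentials[0] = COMPOUNDING_WITHDRAWAL_PREFIX
    else:
        # Regular consolidation: queue moving the source balance to target.
        state.pending_consolidations.append((source, target))
```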
Alex Stokes 47:30: Okay, sounds good, and I put the second PR in the chat. These are coupled, because again it's trying to simplify the deposit flow along with this consolidation flow. And this does point to a lot of the complexity we have even in Pectra A today. All that being said, the ask here is to recognize that we are changing this feature a little bit, in terms of how you would move from a regular validator to a consolidating validator. So please take a look at the PR, that would be 3918, which I put in the chat here. The idea is that this one would be merged first, and then it should further simplify this next PR, 3818. This has been an issue since earlier this year, even at interop, around handling the deposits and making sure that the load on the beacon chain cannot become too much. These are core changes that we need to resolve, so yes, please take a look. I don't know if there's anything else to add, Mikhail, on the 6110 PR.
Mikhail 48:42: Nothing on the PR itself, but I just wanted to mention the performance concern. Some devs were concerned about having this pending deposits queue, which will be entirely rehashed during every epoch processing. It could cause a performance issue, and I want to ask whether we should check this and do some performance measurements, because before Electra we did not have lists in the beacon state that we remove elements from the beginning of. That was the concern, and that's why we wanted to explore some alternative queue designs, to make the list append-only instead of removing from the front. So I just wanted to make sure we don't forget about this concern. Maybe someone has already done some performance measurements and can share them.
Alex Stokes 49:55: Yeah I'm not sure if oh.
Arnetheduck 49:58: Yeah, I look at the performance quite regularly, and hashing is one of the big things we do with the state that takes actual time. The other one is shuffling; those are the two big time sinks. The third one is slightly different, which is basically signature verification and replay of deposits; that's also something we discussed, I think in Kenya. But I would be loath to add too much hashing, because it affects processing in places which are already significantly heavy, such as epoch processing. And often it can't be avoided, it can't be cached away, it can't be tweaked.
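To make the concern concrete, a toy sketch of why a pop-from-the-front SSZ list defeats Merkle caching, plus the append-only alternative Mikhail mentions; this is illustrative, not spec code:

```python
# An SSZ List is Merkleized over its element slots; hash subtrees can be
# cached and reused as long as elements stay in place.

pending = ["d0", "d1", "d2", "d3"]   # stand-in for List[PendingDeposit, N]

# Consuming from the head shifts every remaining element one slot left, so
# (almost) every leaf changes and the whole queue is rehashed each epoch:
pending = pending[1:]                # ["d1", "d2", "d3"]: all leaves move

# Alternative queue design: keep the list append-only and advance a cursor,
# so existing leaves (and their cached hash subtrees) stay untouched:
queue = ["d0", "d1", "d2", "d3"]
head = 0
head += 1                            # "d0" consumed without moving "d1".."d3"
unprocessed = queue[head:]
```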
Alex Stokes 50:54: All right, so I think the takeaway here is that we should look at this, and it would be a nice target for devnet-4, again around this first Pectra set of features and the polish we do there, just to get it on people's radar. There are potential other concerns, around not just the deposits but also the consolidations; there are essentially a number of queues we're adding to the state, and they're all relevant here. So when I say devnet-3 plus polish, this is what I mean: there will be a number of things where ideally we could get benchmarks and implementations today, and use that to inform where we go from there. Something to keep in mind. Okay, so there are those two PRs; please take a look when you have a minute. Next up, there were two more PRs, again in the Pectra A scope, around the attestation updates. As far as I understand, let me grab the links for the chat, it's a number of PRs to make handling attestations easier in clients, given the changes from 7549, I think, the attestation EIP, sorry if I forgot the number. Essentially these are simplifications, at least as proposed, for that feature set, and we had a bit of a temperature check on the last call. I'm bringing them up again as another temperature check: if we are serious about having these in Pectra, then we should make a decision and keep them moving along. There were some comments on the PRs around concerns of complexity, and again we get back to the same question: yes, we could imagine a perfect world, but we might have to stick with a good-enough world. Anyone here, if you've had a chance to look at these PRs, it'd be good to discuss how we think about them. Terence?
Terence 53:26: I started implementing 3900, which is the single attestation one. I think it's a very nice-to-have; it just feels right, it's very clean. But I do worry about the complexity. It's one of those things that looks very simple from the spec perspective, but once you add it to the codebase you see this cascade of changes you have to make. I do think it's much easier in terms of complexity than the previous attestations change, but I just wanted to raise a minor warning on that. I also have a question on the spec side: it only makes changes to the P2P interface, but as a validator I need to be signing and sending those attestations out, right? So isn't this also part of the honest validator duty changes?
Alex Stokes 54:27: Right, with the implication being that this might sound like a simple change, but it might end up being quite a bit more work than it sounds, which was also the issue we had with this EIP in the first place. So that's now the call to make: where do we draw the line between good enough and shipping the perfect solution?
Sean 54:50: So yeah, Etan was working on this from our team and he had similar feedback. I think if we want to ship Pectra quickly, we probably shouldn't include this change, just because it's a lot of extra work for not much tangible gain in the clients, although I think it's good from the spec perspective. I was going to say something else, but I forgot what it was.
Arnetheduck 55:23: I can answer the question, at least for the honest validator part: it can be updated, but it doesn't have to be; you can convert between the two formats freely. The networking part, what we put on the gossip channel, is important for two reasons. One is the DoS possibility that we close with single attestations, which is very easy to exploit today if somebody wanted to. This closes that gap, so that's the immediate benefit the PR gives; it's a security update, basically. At the same time it's also an efficiency update, because we'll be spending less CPU processing attestations if we include it. For those that follow performance: hashing takes time, there are a lot of attestations to process, and the more validators we have, the more that particular cost grows.
Terence 56:36: So if a validator doesn't do this, are we expecting that nodes will basically convert it to the other format and forward it again? So you can assume there's honest behavior in the P2P layer for this?
Arnetheduck 56:55: To be honest, I didn't really consider it. Looking at it after your comment, I think I would include the honest validator change as well; I think that's reasonable. But it doesn't matter for the bulk of the PR. It's better to have it in the spec as well, I agree, but it's not critical. So if we wanted a minimum-impact-on-clients change, you could actually do all the translation in the gossip layer, and then many of these cascading code change concerns can be swept under the rug, I would say.
Terence 57:57: But when you do the translation you still have to spend some CPU cycles and compute, right? So is that less than today? Because when you do the translation you still have to get the committees, you have to figure out which bit to flip in the aggregation field, for example. So there is also compute there, right?
Arnetheduck 58:21: Not really, it's all information that you have. In order to know whether you're supposed to create an attestation to begin with, you have to load the shuffling, because that's what determines which slot you're actually sending the attestation in. So that's all information you have at hand when you're doing that processing. I agree it's kind of pointless not to make the honest validator change, but it's not strictly necessary, that's what I'm trying to say, and it doesn't affect the prime benefit either, which is that nodes validating the single attestation can do the signature check before doing the shuffling. Those that produce attestations have to get the shuffling regardless. So again, I appreciate your comment, it's really good and I think we should update the PR, but that's the nice-to-have of this change.
Terence 59:27: Yeah, I think if this PR were to go in, I would definitely prefer the honest validator change as well, just because it feels right to me. But other than that I have no strong opinion.
Arnetheduck 59:38: All right I'll just throw it in there. I think I kind of agree.
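For readers following along, a rough sketch of the conversion being discussed; the types mirror the shape of the single attestation proposal, but the details here are illustrative, not the final spec:

```python
from dataclasses import dataclass

@dataclass
class SingleAttestation:
    committee_index: int
    attester_index: int   # validator index, known without loading the shuffling
    data: object          # AttestationData in the real spec
    signature: bytes

def to_aggregatable(single: SingleAttestation, committee: list[int]):
    # A receiving node can verify `signature` against `attester_index` first,
    # and only needs the shuffling (the committee list) here, when expanding
    # into the bitfield form used for aggregation. That ordering is the
    # DoS/efficiency win described above.
    bits = [index == single.attester_index for index in committee]
    return bits, single.data, single.signature, single.committee_index
```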
Alex Stokes 59:43: Okay, so then to move forward on the doc: if you can add the validator guide changes to this PR, I think that would help people think about it and see the full scope. I'll try to get maybe another client team to look at an implementation, at least to weigh in, and then we could try to make a decision on the next CL call. I think that's a nice path forward, unless anyone disagrees. Okay, cool! So there are those, and then next on the agenda, I want to be mindful of time because there are some other things to discuss, but I did want to bring up again this PR from Felix about how we handle the requests in the engine API. I feel like on the last call we were pretty close to saying this was good to go; there was an update requested around, I think, some of the ordering semantics, which I believe has been added. Can we go ahead and agree to this change, or are there any debates or questions we'd have about it?
Mikhail 1:01:12: There is an idea to not keep requests on the EL at all, because they are now capped by the CL, and to only keep the requests root in the EL block header as a commitment that will be validated by the EL, given the requests sent from the CL via the engine API. That's probably the alternative to this PR. The EL can compute this requests root commitment any way it wants, and it will not affect the CL part at all. I don't know how much we want to explore this idea, but at least it's an alternative.
Alex Stokes 1:02:11: Okay, thanks. Yeah, I'm not sure we're going to make a decision on this call; I would hope we can next week. Also, this one goes across layers, so we want the EL perspective as well. So that might be it for now; if there are any other comments on this set of changes, now's the time, otherwise we'll go ahead and move forward.
Okay, so next up there's a batch of things, essentially looking at where we are now that we have Pectra A, again this devnet-3 plus the various polish items we've been discussing. There are a number of other PRs; they're all pretty simple, so maybe I'll just call them out one by one. There's this one to get rid of the getPayloadBodies V2 set of methods; I'll just post it here. Let's see, Mikhail, you have this PR, I don't know if there's anything else you would like to add. My understanding was that we basically added this for Pectra when it wasn't as clear how we'd handle the requests, and it's simply no longer necessary, so it makes sense to go ahead and delete the methods, which is what this PR does.
Mikhail 1:03:41: Yeah, that's basically it. It's just an announcement, and a check on whether anything was missed and we should not remove those methods. I think we should, but if there are other opinions, please let us know before the next call.
Alex Stokes 1:04:05: Okay, I lean towards merging this even sooner than the next call, but we'll at least give it a few more days, so please take a look and add any feedback to the PR. Otherwise, there was a batch of things, again around polish for Pectra: with the devnet-3 feature set, what do we need to do? There's a bunch of things around APIs and the connections between the beacon node and the validator client, there's the builder pipeline; there's a bunch of peripheral things that are critical and need to be addressed. So I called some of them out. What's relevant here: there was this PR for the engine API, which just reflects how we're going to handle the requests, and that's in progress. Another one would be the changes to the beacon APIs; I think there are still a few things there, but let me drop this in the chat so people can take a look. And otherwise, let me grab the builder specs PR as well, give me one second, this one is here. The point being: moving ahead with devnet-3 plus polish as Pectra, now is the time to be thinking about these various APIs, so beacon APIs, builder APIs, all the other things. Another one that is relevant is the remote signer, like web3signer. I don't know if anyone on the call is more involved with that and would like to give an update; it would at least be nice to get an acknowledgement that someone is keeping track of it. Okay, that's fine, we'll follow up async on that. Great, so the takeaway here is just to be aware of these PRs. There are some minor changes to all these APIs to reflect the latest Pectra EIPs, and that's something we very much want to get settled as quickly as possible so that we can facilitate Pectra mainnet as quickly as possible. Okay, that was more or less it for Pectra. If there's nothing else, we can move on to PeerDAS, and we do have, I believe, a presentation around the blob counts, which also funnels back into this fork scoping discussion. But maybe we'll quickly start with any devnet updates for PeerDAS; I know they've been in progress for a little while. I don't know if there's anything to update there. Yeah, Arnetheduck?
Arnetheduck 1:07:13: Yeah, there's one more spec PR I want to mention, which is the TTD timeout thing, to get rid of it. I haven't seen anyone against it, so maybe we can just go ahead with it.
Alex Stokes 1:07:27: Okay, I haven't looked at this in a little while, so how about I keep track of it and we discuss either at the end of the call or probably on the next call. But I agree that this generally looked like it was moving in the right direction to merge. I don't know if anyone else has comments on it right now. Okay, we've got an "I like it" in the chat. This would be pretty in scope for the polish of Pectra. Otherwise, if you're listening, client teams, take a look at this PR; there's been a good bit of conversation already, and I believe we discussed this at interop and it seemed like everyone was on board, so just one other thing to keep track of. Okay, thank you. Any PeerDAS devnet updates?
Barnabas 1:08:40: We are working on it.
Alex Stokes 1:08:43: Okay, great! So they're in progress, which reflects that PeerDAS is a little earlier in the development pipeline than some of these other things we have going on. Thanks for that. Cool! So then the next thing on the agenda, there was quite a bit to get through today. Is it Francis? I think Francis should be on the call; I think they wanted to give a presentation. Oh hey, yeah, I'll just hand it over to you. I think you want to talk about blob limits.
Francis 1:09:22: Okay, can everybody see it? Okay, cool, thank you. Hello everyone, my name is Francis, I'm a protocol engineer on the Base team, and I'm really glad to have this opportunity to talk to you about this. So we spent a lot of time discussing the potential split into Pectra A and Pectra B, and it seems like the consensus right now is that Pectra A will take the current devnet-3-ready EIPs and ship, and PeerDAS will go into Pectra B and potentially ship later. Our main point here is about blob capacity for L2s in general: if we are putting PeerDAS into the next hard fork, which could be 9 to 12 months later, then we think the current capacity of three target blobs will not be enough for L2s to scale in the coming months, especially with L2s continuing to grow. With Base, we have a scaling plan to increase our target gas per second from 10 megagas to 25 megagas per second by the end of the year. So the point is that the capacity might not be enough, and we would like to recommend a slight modification to the current proposal in scope for Pectra A, which is that we integrate a blob number increase into Pectra A. There are two ways to do that, both pretty straightforward. The first is to increase both the target and max blob numbers: we can set the target to four and the max to eight. Or we can do just a target increase and keep the max blob number at six, which helps keep the worst-case scenario in check. We understand there are a lot of concerns around network bandwidth, essentially the effects on solo stakers, so we did some very initial analysis and would like to share it here. Regarding network bandwidth, to be clear, we are using six target and twelve max for stress-testing purposes, and all the numbers here are average network bandwidths; there are legitimate concerns about peak bandwidth, and we can get into that a little later, but right now let's talk about averages. For the past week, for just P2P data, the CL clients were using around 400 kilobytes per second and the EL clients around 140 kilobytes per second inbound, and the outbound data is around 670 kilobytes per second in total. We also have data from the EthPandaOps team's dashboard showing the entire data in and out of the container, which includes RPC and Beacon API traffic; that's naturally a little more than just the P2P data, but the point is that P2P data is the main data in and out of the CL and EL clients. So if we add three extra target blobs to the network, we are going to add around 54 kilobytes per second of inbound data and around 384 kilobytes per second of outbound data. Adding it all together, the total P2P data in after the blob increase would be around 300 kilobytes per second, and the total P2P data out would be around one megabyte per second. And this is where it's really important to talk about not only the average data but also the peak, or worst, cases: what if everybody has to disseminate the blobs within two seconds, or four seconds, as the current spec defines?
Then in the worst case, without any more detailed analysis, we can say that the blob outbound data could be around 3 megabytes per second, or, if you have to disseminate within 2 seconds, around 6 megabytes per second, counting everything into that two-second window. So I guess the question is: is that reasonable for solo stakers who are bandwidth-constrained? I did a really quick search on upload bandwidth across the world, and it seems that across roughly 200 countries the average upload speed is around three megabytes per second, per some network-testing websites. So that is just food for thought, but we believe that if we keep the max in check and only increase the target, it would definitely be reasonable. The second point is about imposing a high minimum price for data. The current argument is that validators are not fairly compensated for the compute, storage, and bandwidth required to provide data availability. We believe the problem is legitimate, but addressing it by constraining the DA capacity is a step in the wrong direction. Instead, we would suggest a different approach: we can work on something that both ensures a healthy fee market for blobs and keeps the target blob number unconstrained for L2s to scale. And that, I believe, can be orthogonal to increasing the blob target in this hard fork and can be worked on in parallel, maybe for the next hard fork or the one after that. That's all for my presentation. I'm curious about your thoughts: whether you are okay with going with either of these choices, or if you have any other concerns. Thank you.
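As a sanity check on the worst-case figures above, here is a back-of-envelope reconstruction; the ~128 KB blob size is standard, but the gossip amplification factor of 8 is an assumption chosen to reproduce the quoted numbers, not a measured value:

```python
BLOB_KB = 128          # one blob is ~128 KB of data
MAX_BLOBS = 12         # the 6-target / 12-max stress scenario
AMPLIFICATION = 8      # assumed gossip mesh fan-out (illustrative)

payload_mb = BLOB_KB * MAX_BLOBS / 1024          # 1.5 MB of blobs per slot

# Upload burst if a proposer must push a full-blob slot to its mesh peers:
print(payload_mb * AMPLIFICATION / 4)   # ~3 MB/s over a 4-second deadline
print(payload_mb * AMPLIFICATION / 2)   # ~6 MB/s over a 2-second deadline
```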
Alex Stokes 1:15:58: Great, thanks! That was really nice to see, and thanks for putting it together. These are interesting options, and it's something we should discuss; the data availability of Ethereum is one of our most important features, so it's very important to support it where we can, and one way to do that is to think about changing the blob numbers even in Pectra. To that, I have to say I'm going to maintain my earlier position, which is that we really should leave Pectra as is, work on the spec freeze, and not think about changing anything else. That being said, I'll summarize what's in the chat: in particular, Ansgar had a nice message that changing the max has a lot of implications, while changing just the target is more reasonable and a much more narrowly scoped change. I think that's my take: lean towards leaving things alone for Pectra and focusing on all the blob things in the next fork after Pectra, though I would be open to a target adjustment. Anyone else want to chime in?
Dankrad Feist 1:17:22: I wonder what the progress is on peer-to-peer improvements. It feels like there are still a lot of things we could improve in the P2P stack to reduce these bandwidths, and then it would probably be easier to justify increasing both target and max. Is there any update on those?
Alex Stokes 1:17:55: My sense is that that's been thought about under the PeerDAS work stream, and the state of that is still pretty early. Any other client teams, have you looked into this at all? I think there's a basket of different things we could consider to help the bandwidth consumption.
Dankrad Feist 1:18:15: I mean, the main thing, wasn't that the IDONTWANT discussion, where we said we would reduce the peer-to-peer amplification factor on blobs specifically in order to reduce the bandwidth? That seemed like a very promising direction, so I wonder, are we further along on that?
Arnetheduck 1:18:39: It's been merged into the libp2p spec, but the libp2p spec is moving very slowly these days. Nimbus has an implementation. I know Lighthouse was working on one; I don't know about the others.
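For context, IDONTWANT is a gossipsub control message (part of the v1.2 spec referenced here): after receiving a large message, a node tells its mesh peers the message ID so they can skip re-sending the full payload. Below is a simplified sketch of the idea only, not any client's actual implementation; the size threshold and helper functions are hypothetical.

```python
# Simplified sketch of gossipsub IDONTWANT handling; illustrative only,
# not Nimbus/Lighthouse/Prysm code. Threshold and helpers are hypothetical.

LARGE_MSG_THRESHOLD = 16 * 1024  # assumed size cutoff, in bytes

class Peer:
    def __init__(self, name: str):
        self.name = name
        self.dont_want: set[bytes] = set()  # message IDs this peer declined

def send_idontwant(peer: Peer, msg_id: bytes) -> None:
    # A tiny control frame: only the message ID, not the payload.
    print(f"IDONTWANT {msg_id.hex()[:8]} -> {peer.name}")

def send_full_message(peer: Peer, payload: bytes) -> None:
    print(f"full message ({len(payload)} bytes) -> {peer.name}")

def on_message_received(msg_id: bytes, payload: bytes, mesh: list[Peer]) -> None:
    # 1. For large messages, announce the ID to mesh peers so they
    #    stop forwarding the full payload to us.
    if len(payload) > LARGE_MSG_THRESHOLD:
        for peer in mesh:
            send_idontwant(peer, msg_id)
    # 2. Forward the payload only to peers that have not declined it.
    for peer in mesh:
        if msg_id not in peer.dont_want:
            send_full_message(peer, payload)

if __name__ == "__main__":
    mesh = [Peer("A"), Peer("B")]
    mesh[1].dont_want.add(b"\x01" * 32)  # B already announced IDONTWANT
    on_message_received(b"\x01" * 32, b"\x00" * (128 * 1024), mesh)
    # A receives the full blob; B is skipped, saving the duplicate send.
```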
Dankrad Feist 1:18:53: Right.
Pop 1:18:58: I think both Lighthouse and Prysm have it implemented already, because the underlying libp2p libraries in both Go and Rust have IDONTWANT implemented already, I think.
Dankrad Feist 1:19:14: And have we seen improvements from that or not?
Pop 1:19:20: I don't know whether it has been deployed already or not, but maybe the client teams can talk about it.
Potuz 1:19:27: I think we haven't shipped it; it's slated for the next release, so we can only measure the impact in several weeks. It takes a lot of time for users to actually update.
Dankrad Feist 1:19:39: Right, but would it be fair to say that if we get a large saving from this, then we can commit to giving that to L2s in the form of a target and max increase?
Arnetheduck 1:20:00: I can just give a rough number; it's very easy, since we have a dashboard that tracks how much we save. It's on the order of 40 kilobytes per second right now for IDONTWANT on a mainnet node, with standard Nimbus settings at least. That may not sound like a lot, but compared to the numbers that were presented it isn't nothing either. This number will grow with the number of peers that implement IDONTWANT, so it depends on everybody deploying it; it's a feature that everyone needs to deploy.
Dankrad Feist 1:20:42: Is there any client that isn't going to implement this in the next few months? Okay, sounds like there isn't.
Pop 1:21:04: I think there are some, but I don't know which ones.
Alex Stokes 1:21:07: Yeah, sounds like we don't know right now. Potuz, your hand is up?
Potuz 1:21:12: Right, yeah, I had three unrelated, well, somewhat related comments. For the first one: to get a good measurement you don't really need all clients. You need the largest node operators, which are running essentially Prysm and Lighthouse, to update, and then we're going to get statistically relevant numbers. And regarding your comment, I think everyone would agree that if we have a good measurement, and a good experiment that shows home stakers are capable of taking an increase, then it's fine, we can go for it. Although we've already been in a situation in which builders were sending the target, or even an under-target, number of blobs, while home stakers, since the mempool was full, were sending six blobs, and home stakers were hurt because their bandwidth was consumed. So even measuring this is non-trivial; we need to be clear that the average bandwidth is not the right number to look at. Currently, when you submit your blob you have less than two seconds for it to broadcast to everyone, and for everyone to validate it and consider it available. So again, I repeat that EIP-7732 is a big advance here: it gives you 12 seconds instead of two, so it solves most of these problems as a no-brainer.
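A rough illustration of Potuz's point about deadlines, under the same assumed blob size and fan-out as in the earlier sketch: the 2-second figure reflects today's tight propagation window, while EIP-7732 would relax it toward the full 12-second slot. The parameters are assumptions, not measured values.

```python
# Peak upload rate for the same blob payload under different propagation
# deadlines; a sketch of the EIP-7732 argument with assumed parameters.
BLOB_SIZE = 128 * 1024  # bytes per blob
FAN_OUT = 8             # assumed mesh fan-out

def peak_mb_per_s(num_blobs: int, deadline_s: float) -> float:
    return num_blobs * BLOB_SIZE * FAN_OUT / deadline_s / 1e6

for deadline in (2.0, 12.0):  # today's ~2s window vs a full 12s slot
    print(f"{deadline:4.0f}s deadline: {peak_mb_per_s(6, deadline):.2f} MB/s")
# Stretching the deadline from ~2s to 12s cuts the required peak rate 6x
# for the same payload, which is why the average bandwidth alone is not
# the right number to look at.
```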
Dankrad Feist 1:22:40: Right, understood. Yeah, and apart from PeerDAS there's also the other one, which is that some clients have implemented considering blobs that are in the mempool as available as well. I think that could also be a big improvement, especially for home builders, who are only going to consider mempool blobs.
Francis 1:23:16: Yeah, I think all the concerns make sense. I'm just curious whether everybody is okay with, or open to, maybe just keeping the max the same, so that the worst-case scenario for solo stakers stays the same as it is currently, while we increase the blob target to a higher number, maybe four or five.
Dankrad Feist 1:23:42: So I have an additional suggestion. What Potuz mentioned, that some builders don't include blobs and then home stakers pick up the slack, is an artifact (I'm not saying it's happening currently) of blobs having low tips. So I would encourage especially rollup teams to consider maybe already increasing their tip size right now. Because right now, everyone who includes blobs is essentially already subsidising blobs, since they are not being fairly compensated for the increased reorg risk they're taking. So if we had some big rollups agree that it's a good idea to pay, say, $5 of tips per blob, I don't think that comes down to anything significant per transaction, but it would probably make a huge difference for builders when considering whether they should include blobs.
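A quick back-of-the-envelope on the $5-per-blob suggestion. The transactions-per-blob figures below are hypothetical, chosen only for illustration; real rollup batch sizes vary widely with compression and transaction size.

```python
# Per-transaction cost of a hypothetical $5-per-blob builder tip.
TIP_PER_BLOB_USD = 5.00

for txs_per_blob in (100, 500, 1000):  # assumed batch sizes, illustrative
    per_tx = TIP_PER_BLOB_USD / txs_per_blob
    print(f"{txs_per_blob:>5} txs/blob -> ${per_tx:.4f} per transaction")
# Even at only 100 transactions per blob, the tip adds about five cents
# per transaction, which supports the "nothing significant" claim.
```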
Francis 1:24:50: Yeah, I think this is definitely a great suggestion, and having the network fairly compensated for the blob space is something we agree with, so definitely we can do that. But one small question about this: are we saying that builders are currently not including blobs as they should?
Dankrad Feist 1:25:14: Well, everyone is free to build their blocks as they like, right? But some builders, at least in the past, have chosen just not to include blobs at all, because they said there's no incentive to, since blobs don't pay them any fees.
Francis 1:25:27: Are we talking about Flashbots, from before?
Potuz 1:25:33: I think this is getting out of hand. This was an issue a few months ago, and most builders have fixed it. Flashbots was one, with a serialization issue that is already fixed. Rsync was another one, pushing three blobs when home stakers were pushing six at the time, but then they moved to pushing blobs as they came. Geth itself had a problem in the way it computed tips that disfavors some rollups that have more or less execution. All of these issues, I think, are sort of orthogonal to what we're discussing here.
Francis 1:26:11: Yeah, that makes sense. And we've talked with the Reth team; they haven't done it yet, but they would probably do it down the pipeline. But as we said, we are definitely okay with increasing the tips to help builders include blobs in their blocks, so that solo stakers can take on less of the burden there.
Alex Stokes 1:26:50: Right, so we are almost at time, and maybe just to get to some resolution on the current topic: it does sound like we need a lot more data to understand what a safe modification of the blob parameters would be, especially with PeerDAS. For scope reasons, and again just general momentum reasons, I think we want to touch less rather than more, which would motivate thinking about just the target. And separately, I think there's a lot of work here, especially if we want to roll out gossipsub changes and all of that; it's going to take time. So it sounds like we should move ahead on those separately; I don't think we'll necessarily make a final call today. It's very top of mind for everyone, so we'll definitely revisit this before Pectra hits mainnet. Okay, we're pretty much at time. Any closing comments? Otherwise, thank you all for joining. That was quite a big call but a very important one. If there's nothing else, we'll go ahead and wrap up.
Attendees:
- Stokes
- Potuz
- Etan (Nimbus)
- Nixo
- ethDreamer (Mark)
- Mikhail Kalinin
- Marius
- Spencer - Tb
- Ben Edgington
- Barnabas
- Francis (Base)
- Ahmad Bitar
- Benedikt Wagner
- Guillaume
- Enrico Del Fante (tbenr)
- Ignacio
- Kasey
- Justin Florentine (Besu)
- Piper Merriam
- Pk910
- Justin Traglia
- Anna Thieser
- Carl Beekhuizen
- Roman
- Paritosh
- Phil NGO
- Lucas Saldanha
- Daniel Lehrner (Besu)
- Andrew Ashikhmin
- Lightclient
- Sean
- Saulius
- Katya Riazantseva
- Dankrad Feist
- Terence
- Sophia Gold
- Peter
- Mehdi Aouadi
- Nflaig
- Ansgar Dietrichs
- James He
- Matt Nelson
- Jochem (Eth JS)
- Manu
- Stefan Bratanov
- Rohit Ranjan
- Piotr
- Toni Wahrstaetter
- Racytech
- Danno Ferrin
- Arnetheduck
- Francesco
- Hsiao-Wei Wang
- Pop
- Anders Holmbjerg
- Gajinder
- Trent