I wrote down some thoughts about how to approach integrations and build Arbitrables. Let me know what you think.
TL;DR: We could build primitives to take care of common integrations and lower barriers to entry.
I can see the value in standardizing the integrations, so that time spent highly optimizing data structures and reviewing smart contracts is not a sunk cost for each integration.
Some feedback:
I would merge the examples with the definition of the primitives.
I wouldn't stake myself on the number 3.
Maybe call it an "Integration Model"?
Rephrase the "is this useful" section to make it more to the point; it's a bit too wordy imho.
The model should work for both existing and potential integrations.
The same integration model should not imply the same deployment unit, namely the deployed smart contract. It just means that we don't need to re-develop new pieces of code. We can still deploy new smart contracts with the same optimized and audited code, but for different integration instances. That gives more flexibility to have different lifecycles for each integration depending on the circumstances of the customer. It would also break MetaEvidence right now, as MetaEvidence is unique per contract/integration.
I think it should, actually.
The way MetaEvidence works doesn't imply you need separate contracts. You could, for example, have a MetaEvidence per setting.
For example, in Stake Curate you (will) have a MetaEvidence per List
You mean putting "three" in the title? Or do you mean the "Taskboard" primitive may not be as useful?
I meant the 3 in the title. There are most definitely more than 3, and we will likely keep discovering more.
The way MetaEvidence works
According to the current standard, MetaEvidence refers to the contract which emitted it. That's the current state of things. Of course, if you meant changing the standard, anything is possible.
It's cheaper to do things in hubs. In particular, because it allows addresses to be referenced with 64 bits across apps living in the same primitive, and because keeping everything in the same contract is better for L2 packing (if they wanted to deploy on an L2).
Also, creating a List is cheaper than redeploying the contract (on mainnet, ~50k gas for a List vs. ~1M gas for a new contract).
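To make the 64-bit point concrete, here is a toy Python model of the idea (this is an illustrative sketch, not the actual Stake Curate code): addresses are registered once in the hub and every later cross-app reference only needs an 8-byte id instead of a 20-byte address.

```python
# Toy model of 64-bit address compression inside a hub contract
# (assumed design for illustration, not actual Kleros code).

class AccountRegistry:
    def __init__(self):
        self._ids = {}        # address -> uint64 id
        self._addresses = []  # id -> address

    def register(self, address: str) -> int:
        """Register an address once; returns its compact sequential id."""
        if address not in self._ids:
            self._ids[address] = len(self._addresses)
            self._addresses.append(address)
        return self._ids[address]

    def resolve(self, account_id: int) -> str:
        """Look the full 20-byte address back up from its compact id."""
        return self._addresses[account_id]

registry = AccountRegistry()
alice_id = registry.register("0x" + "aa" * 20)
# After the one-time registration, calldata references need only 8 bytes:
compact_ref = alice_id.to_bytes(8, "big")
assert len(compact_ref) == 8
assert registry.resolve(alice_id) == "0x" + "aa" * 20
```

The one-time registration cost is amortized across every app living in the same hub, which is where the savings come from.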
I'm not implying changing the standard; it can work with how the current standard is defined.
Say you have a list with a listId. You can emit MetaEvidence(listId, _evidence) with the standard MetaEvidence event.
You can have multiple MetaEvidences per contract (e.g. Light Curate has different MetaEvidences for submitting and removing)
It's true that I'm "fiddling with the rules" a bit, because the standard doesn't specify whether emitting multiple MetaEvidence events with the same _metaEvidenceID from the same contract is fine, or what would happen.
And the way I'm assuming you can change policies for a list is just "overwriting" the MetaEvidence. Which is probably wrong.
What I mean is, if I wanted to make a Wines TCR, I just create a list with whatever settings I need. That list gets listId 43. I build my frontend with that listId in mind, querying items living in that list, etc.
Yeah, I think we assume overwriting currently. That's up to the client code at that point (webapp, bots).
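The MetaEvidence-per-list reading can be sketched in a few lines of Python (client-side pseudologic, an assumption about how webapps/bots would interpret the log, not part of the standard): one contract emits MetaEvidence events keyed by listId, and "overwriting" just means clients keep the latest event per id.

```python
# Sketch of one-contract, many-lists MetaEvidence (assumed client behaviour).
# Each event is (metaEvidenceID, evidenceURI); listId is reused as the id.

events = []  # ordered on-chain log of emitted events

def emit_meta_evidence(list_id: int, evidence_uri: str) -> None:
    """Model of the contract emitting MetaEvidence(listId, _evidence)."""
    events.append((list_id, evidence_uri))

def current_policy(list_id: int) -> str:
    """Client code: the effective policy is the LAST event for that id."""
    return [uri for (i, uri) in events if i == list_id][-1]

# Hypothetical Wines TCR list (listId 43, as in the example above):
emit_meta_evidence(43, "/ipfs/wines-policy-v1.json")
emit_meta_evidence(44, "/ipfs/other-list-policy.json")
emit_meta_evidence(43, "/ipfs/wines-policy-v2.json")  # update = overwrite
assert current_policy(43) == "/ipfs/wines-policy-v2.json"
```

The URIs and file names here are made up; the point is only that uniqueness per (contract, _metaEvidenceID) pair, plus last-write-wins in the client, is enough to avoid one contract per integration.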
What I was trying to say is: perhaps it is desirable for some integrations to be deployed in the same unit, for example the Lists as you mentioned, but not necessarily. The integration model doesn't have to be prescriptive on the deployment side of things, as they are different concerns.
So the artifacts we need for an integration with a customer could include:
The arbitration/subcourt policy
The integration model = which piece of code we reuse, highlighting any custom code needed for the full stack
The deployment model = which production services and infrastructure resources are needed for the full stack
Some operations document about upgrades and security procedures for this integration. For example, Reality.eth has 3 versions of its protocol and we have different Arbitrables for different Reality versions.
Yeah… I think what I'm proposing in that doc are not actually "Primitives" after all; they're more like Prebuilds (convenient, limited but prebuilt solutions that can be useful to projects willing to limit themselves to certain features).
I agree that restricting Primitives as I'm doing in the doc is limiting. That was the point. It may be a bad idea.
Maybe I can rename this idea to "Prebuilds" and we could figure out the Primitives in a different project that allows for upgradability, etc.
Just, my argument is that many integrations won't need all that customizing anyway, even if they are convinced they do.
Just reading through the thread here and I think (gonna use prebuild = primitive in my response for ease):
Yes, it makes sense for us to start drilling down to the "prebuilds" that can be reused over and over, whether through reusing the same instance or redeployment.
I agree with Prebuild 1's use case.
For Prebuild 2, it sounds more like the Reality.eth+Gnosis SafeSnap use case than insurance?
For Prebuild 3, this is an interesting approach, which our partners have not suggested/mentioned so far
There is always a tension between business-driven vs product-driven development in any organisation, and I think it's very helpful to let "business demand" (if we can even use this word in our context) inform where we should focus.
For Prebuild 3: unless we are entering the micro-task/"mechanical turk" markets ourselves, I don't yet see a partner with a viable and competitive model. Traditional platforms like Fiverr and Upwork are actually excellent for jobs between $10 and $10k, and the gas costs and stakes/deposits needed to decentralize it are disproportionate to the value at stake. Where I definitely see this market being viable is the higher-value jobs, like finding huge bugs in smart contracts (aka Hats Finance).
Prebuild 1 is definitely great and we should double down on it, though I think for content moderation, our current Court is better for "slower" actions like retroactive account takedowns and account bans (like in games like League of Legends). More work and research still needs to be done to respond to real-time misinformation censorship (for which an oracle/prediction market model might come closer to being a good solution, e.g. bet on whether something should be taken down within a 1hr betting window, and then escalate to Kleros if stakes are high enough).
Insurance claim management is a fantastic use case for Kleros Dispute Resolver and where we have a great product-market fit, though I don't see it as meant for Prebuild 2, which seems like a prediction market use case?
For Prebuild 3, I might suggest this way of working to a few partners struggling with a viable model, as it might make more sense in some cases than the traditional escrow-based model.
Prebuild 2 (Predictions / Assertions) is mostly to force agents to have stake in the game for things. E.g. "project Y will have 300% APY!", "EIP 4488 will be deployed before (date)!".
I think priorities (dev wise) should be Lists >>>> Taskboard > Predictions
Can we also put "a great frontend for Dispute Resolver and Court v2" next to Lists?
I realised that the easiest way for us to get from first-contact to go-live with insurance protocols is to prescribe a standalone integration that uses their multi-sig to execute the results of Kleros Court, with a claim manager contract integration as Step 2.
I like the "Prebuild" distinction Green, there is definitely a spectrum of possibilities for making integrations more reusable.
Perhaps the keyword is that Prebuilds should be self-service: anyone interested in integrating could do it without any friction from the Kleros team.
Flexible <----------------------|-------------------------> Opinionated

Custom Integration    Deployment Model Reuse       Prebuild
Longer to ship        Faster to ship               Self-service by customer
Most expensive        Cheaper                      Cheapest
Anything possible     Limited by new deployment    Limited by code
                      of existing code             already deployed
Prebuilds vs. Functional/business components
Functional component examples: Oracle, Curation, Escrow, Governor as listed on the Kleros Services docs.
1 Functional component may be implemented by more than 1 Prebuild, if there is such a need for specific use-cases.
More work and research still needs to be done to respond to real-time misinformation censorship
Totally agree. We need this to crack the content moderation/social media space. Escalation games à la Realitio are the best we have right now; there's gotta be a better solution.
Suggesting to rename the post to something more self-explanatory, maybe "Prebuilds: Reusable Integration Primitives" or "Prebuilds: Self-Service Integrations".
Is it really useful to compress addresses down to 64 bits when (at least in my understanding) this is part of the optimizations that rollups are expected to do on all addresses? I guess it will still be useful on mainnet, but I wonder if curation will really be a big thing on mainnet. EDIT: Actually, I looked up how this address compression works because, after thinking about it, it seemed a bit too magical lol. In the case of Arbitrum at least, there's a global Address Registry that you would probably want to use instead of a local account mapping, since that would remove the redundant one-time cost of registering the address (the address in question would likely have already been registered in the global Address Registry, and if not, it is likely to be registered there at some later point anyway).
You mention that creating a contract costs around 1M gas. But I think you can create a proxy contract for very cheap, and having a contract address dedicated to some project (rather than a (contract, ID) pair) feels much cleaner IMO. Here's an example of such a contract being created with 170k gas: https://etherscan.io/tx/0x1db4f455472eff15638e603f5ce1a081808d029a67231d117fd866c8e31f5854 That's still more than the 50k gas you mention, but given the advantage of having a dedicated contract and the fact that this is a one-time operation, I think it's well worth it.
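A cheap proxy like this is typically an EIP-1167 "minimal proxy" (an assumption about the linked transaction; I have not decoded it): 55 bytes of creation code that deploy a 45-byte runtime which delegatecalls every call to a fixed implementation address. A small Python sketch of the bytecode layout:

```python
# Build EIP-1167 minimal-proxy creation code for a given implementation.
# Layout is fixed by the EIP: 10-byte constructor + 45-byte runtime.

def minimal_proxy_creation_code(implementation: str) -> bytes:
    impl = bytes.fromhex(implementation.removeprefix("0x"))
    assert len(impl) == 20, "implementation must be a 20-byte address"
    return (
        bytes.fromhex("3d602d80600a3d3981f3")           # constructor: copy runtime to memory, return it
        + bytes.fromhex("363d3d373d3d3d363d73") + impl  # runtime: forward calldata, DELEGATECALL impl
        + bytes.fromhex("5af43d82803e903d91602b57fd5bf3")
    )

code = minimal_proxy_creation_code("0x" + "11" * 20)
assert len(code) == 55        # creation code size per EIP-1167
assert len(code) - 10 == 45   # deployed runtime size
```

Because only 45 bytes of runtime code are stored, the deployment cost is dominated by the fixed transaction/creation overhead, which is why it lands far below the ~1M gas of a full redeployment.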
Thinking about current iterations of Curate, I think the reason it is so unbalanced incentive-wise is that they use the Prediction Primitive system instead of the List Primitive.
PoH has similar problems (no incentive to remove humans) that could be solved with a custom List solution.
I don't mean the submissions themselves are Predictions, but that the data structure is better suited for Predictions. This is something I realized while chatting with shotaro: the data structure I used for Slot Curate (inspired by the current iteration) was just perfect for Predictions.
This means that, after the deadline is over, it's out of the game, thus removing any intrinsic motivation to police the submission.
However, one assumption I made when building Stake Curate (the ongoing implementation of the List Prebuild) is that submitters need intrinsic incentives for their submissions to remain included in the set. This is good for "positive, corruptible lists" such as a list of non-spam accounts, but it may be troublesome for "negative lists" such as a list of scammers. Research needed.