Prebuilts: Self-Serve Integrations

I wrote down some thoughts about how to approach integrations and build Arbitrables. Let me know what you think.
TL;DR: We could build primitives to take care of common integrations and lower barriers to entry.

(the discussion below is a reposting from an internal discussion thread)


I support this proposal.



I can see the value in standardizing the integrations, so that the time spent heavily optimizing data structures and reviewing smart contracts is not a sunk cost repeated for each integration.

Some feedback:

  • I would merge the examples with the definition of the primitives.
  • I wouldn’t stake myself on the number 3.
  • Maybe call it an “Integration Model” ?
  • Rephrase the “is this useful” section; make it more to the point, it’s a bit too wordy imho

The model should work for both existing and potential integrations.

  • I wouldn’t stake myself on the number 3.

you mean, on putting “three” on the title? or you mean, the “Taskboard” primitive may not be as useful?

“is this useful section” is wordy

agree, on my way to fix it

The same integration model should not imply the same deployment unit, namely the same deployed smart contract. It just means that we don’t need to re-develop new pieces of code. We can still deploy new smart contracts with the same optimized and audited code for different integration instances. That gives more flexibility: each integration can have its own lifecycle depending on the circumstances of the customer. Also, sharing one deployment would break MetaEvidence right now, as it is unique per contract/integration.

I think it should, actually.
The way MetaEvidence works doesn’t imply you need separate contracts. You could, for example, have a MetaEvidence per setting.
For example, in Stake Curate you (will) have a MetaEvidence per List

you mean, on putting “three” on the title? or you mean, the “Taskboard” primitive may not be as useful?

I meant 3 in the title, there are most definitely more than 3 and we will likely keep discovering more.

The way MetaEvidence works

According to the current standard, a MetaEvidence event refers to the contract which emitted it. That’s the current state of things. Of course, if you implied changing the standard, anything is possible.

It’s cheaper to do things in hubs. In particular, because it allows addresses to be referenced with 64 bits across apps living in the same primitive, and because keeping everything in the same contract is better for L2 packing (if they wanted to deploy on an L2).
Also, creating a List is cheaper than redeploying the contract (on mainnet, ~50k gas compared to ~1M gas).
I’m not implying changing the standard, it can work with how the current standard is defined.
Say you have a list with listId . You can emit a MetaEvidence(listId, _evidence) with the standard MetaEvidence
You can have multiple MetaEvidences per contract (e.g. Light Curate has different MetaEvidences for submitting and removing)
It’s true that I’m “fiddling with the rules” a bit, because the standard doesn’t specify whether emitting multiple MetaEvidence events with the same _metaEvidenceID from the same contract is fine, or what would happen.
And the way I’m assuming you can change policies for a list is just “overwriting” the MetaEvidence. Which is probably wrong.
What I mean is, if I wanted to make a Wines TCR, I just create a list with whatever settings I need. That list gets listId 43. I build my frontend with that listId in mind, querying items living in that list, etc.
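To make the “hub” idea above concrete, here is a rough Python model of a single contract that holds every list, registers each address once, and afterwards references it by a compact 64-bit ID. All names are illustrative; this is not Stake Curate’s actual interface.

```python
# Illustrative model of the "hub" pattern: one contract holds all lists,
# addresses are registered once and then referenced by a small integer id.
# Names and structure are hypothetical, not Stake Curate's real API.

class CurateHub:
    def __init__(self):
        self.accounts = {}   # address -> compact (64-bit) account id
        self.lists = []      # listId -> settings dict
        self.items = []      # items store small ids, not full addresses

    def register_account(self, address: str) -> int:
        # One-time cost per address; every later use stores only the id.
        if address not in self.accounts:
            self.accounts[address] = len(self.accounts)
        return self.accounts[address]

    def create_list(self, settings: dict) -> int:
        # Creating a list only appends storage inside the same contract,
        # which is why it is far cheaper than deploying a new contract.
        self.lists.append(settings)
        return len(self.lists) - 1

    def add_item(self, list_id: int, submitter: str, data: bytes) -> int:
        self.items.append({
            "list": list_id,
            "account": self.register_account(submitter),  # compact ref
            "data": data,
        })
        return len(self.items) - 1

hub = CurateHub()
wines = hub.create_list({"policy": "ipfs://..."})  # a hypothetical Wines TCR
item = hub.add_item(wines, "0xAbc...", b"Chateau X")
```

A frontend would then be built against that `listId`, querying only the items living in that list.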

Yeah I think we assume overwriting currently. That’s up to the client code at that point (webapp, bots).
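The “overwriting” assumption can be expressed as a tiny client-side rule, sketched here in Python with a simplified event shape: when several MetaEvidence events share the same _metaEvidenceID, the most recently emitted one wins.

```python
# Client-side resolution of MetaEvidence under the "overwrite" assumption:
# the latest event with a given _metaEvidenceID is the active one.
# Event fields are simplified for illustration.

def resolve_meta_evidence(events, meta_evidence_id):
    """Return the latest evidence URI emitted for a given _metaEvidenceID."""
    latest = None
    for ev in events:  # events assumed ordered by (block, log index)
        if ev["metaEvidenceID"] == meta_evidence_id:
            latest = ev["evidence"]
    return latest

events = [
    {"metaEvidenceID": 43, "evidence": "ipfs://policy-v1.json"},
    {"metaEvidenceID": 7,  "evidence": "ipfs://other-list.json"},
    {"metaEvidenceID": 43, "evidence": "ipfs://policy-v2.json"},  # overwrite
]
resolve_meta_evidence(events, 43)  # -> "ipfs://policy-v2.json"
```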

What I was trying to say is: perhaps it is desirable for some integrations to be deployed in the same unit, for example the Lists you mentioned, but not necessarily. The integration model doesn’t have to be prescriptive about deployment, as these are different concerns.

So the artifacts we need for an integration with a customer could include:

  • The arbitration/subcourt policy
  • The integration model = which pieces of code we reuse, highlighting any custom code needed for the full stack
  • The deployment model = which production services and infrastructure resources are needed for the full stack
  • Some operations document about upgrades and security procedures for this integration. For example, Reality.eth has 3 versions of its protocol and we have different Arbitrables for the different Reality versions
  • More?

Yeah… I think what I’m proposing in that doc are not actually “Primitives” after all; they’re more like Prebuilts (convenient, limited, but prebuilt solutions that can be useful to projects willing to limit themselves to certain features).
I agree that scoping Primitives as I do in the doc is limiting. That was the point. It may be a bad idea.
Maybe I can rename this idea to “Prebuilts” and we could figure out the Primitives in a different project that allows for upgradability, etc
just, my argument is that many integrations won’t need all that customizing anyway, even if they are convinced they do

Just reading through the thread here, and I think (gonna use prebuild=primitive in my response for ease):

  1. Yes it makes sense for us to start drilling down to the ‘prebuilds’ that can be reused over and over, whether through reusing the same instance or redeployment.
  2. I agree with Prebuild 1’s use case
  3. For Prebuild 2, it sounds more like the Reality.eth+Gnosis SafeSnap use case than insurance?
  4. For Prebuild 3, this is an interesting approach, which our partners have not suggested/mentioned so far

There is always a tension between business-driven vs product-driven development in any organisation, and I think it’s very helpful to let ‘business demand’ (if we can even use this word in our context) inform where we should focus.

For Prebuild 3: unless we are entering the micro-task/‘mechanical turk’ markets ourselves, I don’t yet see a partner with a viable and competitive model. Traditional platforms like Fiverr and Upwork are actually excellent for jobs between $10 and $10k, and the gas costs and stakes/deposits needed to decentralize them are disproportionate to the value at stake. Where I definitely see this market being viable is higher-value jobs, like finding huge bugs in smart contracts (aka Hats Finance).

Prebuild 1 is definitely great and we should double down on it, though I think for content moderation, our current Court is better suited to ‘slower’ actions like retroactive account takedowns and account bans (as in games like League of Legends). More work and research still needs to be done to respond to real-time misinformation censorship (for which an oracle/prediction market model might come closer to being a good solution, e.g. bet on whether something should be taken down within a 1-hour betting window, then escalate to Kleros if the stakes are high enough).

Insurance claim management is a fantastic use case for Kleros Dispute Resolver and where we have a great product-market fit, though I don’t see it as meant for Prebuild 2, which seems like a prediction market use case?

Sorry for this word dump :stuck_out_tongue:


For Prebuild 3, I might suggest this way of working to a few partners struggling with a viable model, as it might make more sense in some cases than the traditional escrow-based model.

Prebuild 2 (Predictions / Assertions) is mostly to force agents to have a stake in the game, e.g. “project Y will have 300% APY!” or “EIP-4488 will be deployed before (date)!”.
I think priorities (dev wise) should be Lists >>>> Taskboard > Predictions
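The Predictions/Assertions mechanism above can be sketched in a few lines of Python: an agent backs a claim with a stake, anyone can challenge it before a deadline (escalating to arbitration), and unchallenged claims finalize. All names, fields, and numbers here are purely illustrative, not a real Kleros interface.

```python
# Minimal sketch of the "Predictions / Assertions" prebuilt idea: a staked
# claim that can be challenged before its deadline and escalated to
# arbitration. Purely illustrative; not an actual Kleros contract.
from dataclasses import dataclass

@dataclass
class Assertion:
    claim: str
    asserter: str
    stake: int        # deposit forcing the agent to have a stake in the game
    deadline: int     # timestamp after which the assertion finalizes
    challenged: bool = False

def challenge(assertion: Assertion, now: int) -> str:
    """Challenge an assertion; only possible before its deadline."""
    if now >= assertion.deadline:
        return "too late: assertion finalized, stake returned"
    assertion.challenged = True
    return "escalated to arbitration"

a = Assertion("EIP-4488 deployed before (date)", "0xAbc", stake=100,
              deadline=1_700_000_000)
challenge(a, now=1_650_000_000)  # -> "escalated to arbitration"
```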

Can we also put “a great frontend for Dispute Resolver and Court v2” next to Lists? :grimacing:
I realised that the easiest way for us to get from first-contact to go-live with insurance protocols is to prescribe a standalone integration that uses their multi-sig to execute the results of Kleros Court, with a claim manager contract integration as Step 2.

I like the “Prebuild” distinction Green, there is definitely a spectrum of possibilities for making integrations more reusable.

Perhaps the keyword is that Prebuilds should be self-service: anyone interested in integrating could do it without any friction from the Kleros team.

    Flexible <----------------------|-------------------------> Opinionated

    Custom Integration     Deployment Model Reuse      Prebuild
    Longer to ship         Faster to ship              Self-service by customer
    Most expensive         Cheaper                     Cheapest
    Anything possible      Limited by new deployment   Limited by code
                           of existing code            already deployed

Prebuilds vs. Functional/business components

  • Functional component examples: Oracle, Curation, Escrow, Governor as listed on the Kleros Services docs.
  • 1 Functional component may be implemented by more than 1 Prebuild, if there is such a need for specific use-cases.

More work and research still needs to be done to respond to real-time misinformation censorship

Totally agree. We need this to crack the content moderation/social media space. Escalation games à la Realitio are the best we have right now; there’s gotta be a better solution.


Yeah 100% on the self-service thing.


Suggesting to rename the post to something more self explanatory, maybe “Prebuilds: Reusable Integration Primitives” or “Prebuilds: Self-Service Integrations”.


Doesn’t Fiverr take 30% of the profits though?

Just two sidenotes I have doubts about:

  1. Is it really useful to compress addresses down to 64 bits when (at least in my understanding) this is part of the optimizations that rollups are expected to do on all addresses anyway? I guess it will still be useful on mainnet, but I wonder if curation will really be a big thing on mainnet. EDIT: Actually, I looked up how this address compression works, because after thinking about it, it seemed a bit too magical lol. In the case of Arbitrum at least, there’s a global Address Registry that you would probably want to use instead of a local account mapping, since that would remove the redundant one-time cost of registering the address (the address in question would likely have been registered in the global Address Registry already, and if not, it is likely to be registered there at some later point anyway).
  2. You mention that creating a contract costs around 1M gas. But I think you can create a proxy contract very cheaply, and having a contract address dedicated to some project (rather than a (contract, ID) pair) feels much cleaner IMO. Here’s an example of such a contract being created with 170k gas: That’s still more than the 50k gas you mention, but given the advantage of having a dedicated contract, and the fact that this is a one-time operation, I think it’s well worth it.
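For point 1, the trade-off between a chain-wide address table and a per-contract mapping can be modelled in a few lines of Python. The costs and names below are purely illustrative; this is a conceptual sketch, not Arbitrum’s actual precompile interface.

```python
# Conceptual model of address compression tables. A global table shared by
# every app amortizes the one-time registration cost; a per-contract
# mapping pays that cost again locally. Gas numbers are made up.

class AddressTable:
    REGISTER_COST = 20_000   # hypothetical one-time storage write
    LOOKUP_COST = 100        # hypothetical cheap read

    def __init__(self):
        self.ids = {}
        self.gas_spent = 0

    def compress(self, address: str) -> int:
        """Return a compact id for an address, registering it if needed."""
        if address not in self.ids:
            self.ids[address] = len(self.ids)
            self.gas_spent += self.REGISTER_COST
        else:
            self.gas_spent += self.LOOKUP_COST
        return self.ids[address]

global_table = AddressTable()    # shared by every app on the rollup
global_table.compress("0xAbc")   # some OTHER app already paid this cost
before = global_table.gas_spent
global_table.compress("0xAbc")   # our app only pays a cheap lookup

local_table = AddressTable()     # a per-contract mapping...
local_table.compress("0xAbc")    # ...pays the full registration cost again
```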

Thinking about current iterations of Curate, I think the reason they are so unbalanced incentive-wise is that they use the Prediction Primitive system instead of the List Primitive.
PoH has similar problems (no incentive to remove humans) that could be solved with a custom List solution.

I don’t mean the submissions themselves are Predictions, but the data structure is better suited for predictions. This is something I realized while chatting with shotaro: the data structure I used for Slot Curate (inspired by the current iteration) was just perfect for Predictions.
This means that, once the deadline is over, an item is out of the game, thus removing any intrinsic motivation to police the submission.

However, one assumption I made when building Stake Curate (the ongoing implementation of the List Prebuilt) is that submitters need intrinsic incentives for their submissions to remain included in the set. This is good for “positive, corruptible lists”, such as a list of non-spam accounts, but it may be troublesome for “negative lists”, such as a list of scammers. Research needed.