iCKB journey into CoBuild

L1 Future

The CoBuild protocol is the future of L1 transactions. By introducing metadata to transactions and showing a clear path towards OTX, this protocol will bring Nervos L1 into a new era. Then again, by introducing CoBuild, it's getting more and more complex to code locks that correctly handle signatures, especially in an OTX environment.

As an L1 developer, I'd like to be able to code an application L1 script and still be able to sleep well at night. It would help knowing that signatures are handled correctly, that the code handling them is verified and that no detail was overlooked. It would be so nice if I could just focus on my business logic while signature unlocking is already handled by a base contract.

Problem

For iCKB I developed a limit order contract that's partially fulfillable. It works really nicely in a classical L1 environment, but it's definitely not CoBuild OTX ready, as it's position-based and may not be safe in an OTX context.

Conceptually a Limit Order lock is very similar to an Anyone Can Pay lock; in short, the lock can be unlocked either:

  1. By the user with signature, directly or indirectly.
  2. By anyone with a transaction that satisfies its business logic.
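
The two unlock paths above can be sketched as follows. This is a minimal illustration, not the actual iCKB contract: the cell shapes, field names and the value-preservation fulfillment rule are all assumptions made for the sake of the example.

```python
# Illustrative sketch of the two unlock paths of a limit order lock;
# cell shapes and the value rule are assumptions, not the real contract.

def order_value(cell, ckb_mul, sudt_mul):
    # A cell's value measured under the order's price terms.
    return cell["ckb"] * ckb_mul + cell["sudt"] * sudt_mul

def can_unlock(tx, order_in, order_out, user_lock, ckb_mul, sudt_mul):
    # Path 1: the user unlocks (modeled indirectly here, via an input
    # cell guarded by the user's own lock elsewhere in the transaction).
    if any(c["lock"] == user_lock for c in tx["inputs"]):
        return True
    # Path 2: anyone may spend the order, as long as the output cell
    # does not decrease the order's value (a partial fulfillment).
    return (order_value(order_out, ckb_mul, sudt_mul)
            >= order_value(order_in, ckb_mul, sudt_mul))
```
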

Reading the CoBuild OTX Overview comments, there is an elegant example of how OTX will enable limit orders, but it crucially lacks partial fulfillability. Also, I'm not aware of any existing L1 Limit Order script for sUDT.

So these days I was thinking about how to re-design the lock to be CoBuild OTX ready. I evaluated many designs, but one that I found promising was to use Omnilock as a base.

Enter Omnilock

Omnilock is probably the most feature-rich lock on L1, as it supports many forms of signature unlocking and modes, crucially:

  1. There exists P2SH unlocking: same as for the sUDT owner lock, Omnilock will check whether the current transaction contains an input cell with a matching lock script.
  2. There exists Administrator Mode: at a high level, the user can authorize a secondary way of unlocking his lock.

A Limit Order Design

A Limit Order design could use Omnilock as a base and be unlocked either:

  1. By the user with signature, as usual.
  2. Using Administrator Mode with limit order business logic included as P2SH.

This design is readily applicable to classic L1 and, depending on how Omnilock is updated, it could also be applicable in L1 CoBuild OTX.

6 Likes

I created a Feature Request on the Omnilock repo to implement P2SH by type Auth :muscle:

4 Likes

There are several issues discussed in this post:

The ability (or inability) to build new locks in the context of cobuild

I’m not gonna lie here, cobuild is indeed complex. It’s the combination of several different factors:

  • L1 CKB is flexible enough, but in a way the flexibility could be seen as a curse: a huge deal of work is required on the script side to make cells secure
  • We do want CKB untyped cells to become typed via cobuild, making collaboration between scripts easier. Personally I think this used to be a nightmare, and I do think we should work together to fix the problem
  • There is already code running on CKB, and it makes no sense to break it. As a result, a ton of effort was put into cobuild to ensure compatibility with existing scripts

The good news is, a huge deal of validation logic in cobuild, to me, is just repetitive work. These are still early days for cobuild, but as we continue to build things, I do envision that the majority of cobuild logic can be distilled into libraries that one can simply invoke to do the validation, and we are already moving in this direction:

So for future CKB locks, I would never recommend one to manually code all the cobuild related logic unless absolutely necessary. It’s better to just rely on commonly maintained libraries, or as you mentioned, focus on business logic.

OTX

I dabbled in this problem briefly here: A Fine-Grained Control Scheme Based on Cobuild-OTX - #7 by xxuejie. The way I see CKB TX and OTX, we now really have 3 different levels of granularity:

  1. A sighash-all signed transaction, where every piece in the transaction is signed. No modification is allowed
  2. An OTX, where input / output cells are fully guarded by OTX signature. No modifications to those cells are allowed
  3. An OTX with only input cells and action objects. No output cells are guarded, or part of the output cells are guarded.

I do agree that partial fulfillability is not possible at level 2 here. But it's okay to build an OTX for limit orders with only input cells, then use an Action object (it could use the limit order cell's lock/type script as the Action's script_hash) to fill in parameters for limit orders, which will enable partial fulfillability. The way I look at this design, it is essentially an "atomic approve" function that approves a certain amount of tokens (determined by the input cells of the OTX) to a certain type script (the limit order cell's lock/type script here). The lock/type script of the limit order cell can then look at all OTXs in the current CKB TX, process the Action objects of each limit order, then either completely or partially fulfill each order.
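
The "atomic approve" pattern described above might be sketched like this; the data shapes and field names are assumptions for illustration only (the real CoBuild structures are molecule-defined):

```python
# Sketch: a limit order script collecting, from every OTX in the current
# CKB TX, the Action objects addressed to it via script_hash. The budget
# approved by each OTX is bounded by its input cells. All shapes assumed.

def collect_approvals(otxs, script_hash):
    approvals = []
    for otx in otxs:
        for action in otx["message"]["actions"]:
            if action["script_hash"] != script_hash:
                continue  # Action addressed to some other script
            budget = sum(cell["udt"] for cell in otx["inputs"])
            approvals.append({"params": action["data"], "budget": budget})
    return approvals
```
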

This is indeed contrary to the previous OTX design, but fundamentally we are raising a new question: does an OTX really need to fixate all requirements in the fields of cells? I personally have a different opinion: yes, we could have apps that fixate both input cells and output cells, but different apps are designed with different minds and different feature sets. There are certainly people who can benefit from OTXs that only fixate input cells (essentially limiting the maximum amount of tokens participating in the actions) and leverage cobuild's Action objects for complex operations, which they are also designed for.

Omnilock

I could understand the rationale here, but I personally would think it in different ways:

  • An Action object using the lock script’s hash as script_hash, in cobuild’s convention, is already a P2SH design.
  • I would personally recommend keeping app logic in type scripts, and only do signature validation for proving ownership in lock scripts.
5 Likes

Hey @xxuejie, thanks for taking your time to reply!

We do want CKB untyped cells to become typed via cobuild, making collaborations between scripts to be easier. Personally I think this used to be a nightmare, and I do think we should work together to fix the problem

Agreed, actually that was my first thought after I understood L1 workings. iCKB will support CoBuild.

the majority of cobuild logic, can be distilled into libraries, that one can simply invoke to do the validation, and we are already moving into this direction

Glad to see first class Rust support :+1:

The way I see CKB TX and OTX, is that we now really have 3 different levels of granularity […] I do agree that partial fulfillability is not possible at level 2 here.

Depends on the compromises you make. I developed a lock script that supports partial fulfillability at level 1 (since it requires no signatures at all except the first one). It's nicely running on testnet with the last revision of iCKB. @jm9k is running the testing.

The way I look at this design, is that it is essentially an “atomic approve” function, that basically approves certain amount of token […] to a certain type script(limit order cell’s lock/type script here)

Yup yup, adding an Admin Mode with the spending logic is exactly an approval for spending to a certain type script, see feature request. Parameters for spending are stored in the cell data of the Omnilock cell.

The lock/type script of limit order cell, can then look against all OTXs in current CKB TX […] then either completely fulfills or partially fulfills each order.

Same same, of course you need an off-chain entity preparing the transaction.

I would personally recommend keeping app logic in type scripts

Exactly what I’m trying to achieve with this feature request.

Could you take a look at the feature request and tell me once again if it’s Action compatible or not? If it’s not, why?

5 Likes

Yesterday I studied once again the Proposal and today the NervosDAO POC. I’d say it clicked for me, especially after looking at the dao schema and at the on-chain message validation.

In the message it is possible to specify arbitrary intentions/actions (including arbitrary fields: notice how waiting_milliseconds is included in the NervosDAO action, while this field does not directly exist in the transaction representation); it's up to the on-chain script to validate whether the intentions/actions described in the message match the actual transaction.
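
As a sketch of this idea, an on-chain script might validate a withdraw Action against the transaction like this; every field name here is a made-up assumption (only waiting_milliseconds comes from the NervosDAO POC mentioned above):

```python
# Sketch: validating that a declared intention in the message matches the
# actual transaction. waiting_milliseconds, like in the NervosDAO action,
# has no direct counterpart in the tx and needs no cross-check.

def validate_withdraw_action(action, tx):
    # The declared amount must match what the tx actually sends to the
    # declared lock; purely informational fields need no tx counterpart.
    actual = sum(cell["capacity"] for cell in tx["outputs"]
                 if cell["lock"] == action["to_lock"])
    return action["amount"] == actual
```
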

So, to reply to my own question, I’d say: yes, representing the proposed limit order with CoBuild Actions is pretty straightforward! :muscle::muscle::muscle:

5 Likes

I think you already figured this out. I can only say that personally, I don't see a use case here for the administrator mode. I would simply include an Action using the iCKB lock's hash as the script_hash field in the Action, and include the parameters for the limit order in the data field of this very Action. The OTX's existing lock will ensure the parameters are guarded by signatures.

Then in iCKB lock, you can loop through all OTXs in current transaction, extracting the limit order parameters from an Action object in each OTX’s Message field, validate as you want, and fulfill all the iCKB logic.

This way you can really use omnilock, and in fact any cobuild OTX-compatible lock as the lock script. Administrator mode is not needed here. And this is the whole point: cobuild should enable any lock to pair with any type for on-chain activities, you don’t really build your app against a particular lock with a particular feature anymore.

Or to put it in a different way: I think cobuild provides all cobuild-compatible locks with an administrator mode for free.

3 Likes

From this message I understood the following:

  • The expression iCKB lock is used multiple times, not limit order, so I'll assume iCKB lock stands for the iCKB_logic lock, i.e. the lock managing the iCKB protocol.
  • The high level suggestion is to use actions as/instead-of limit orders.

(Please correct me if I misunderstood!! :pray:)

Deposit would be doable without limit orders even if it’s a two step process.

Withdrawals are not doable without supporting partial fulfill-ability. In the past I considered this (actions just formalize a way); the underlying issue is that there is a mismatch between the amount the user (or group of users) needs to withdraw and the deposit sizes in the iCKB pool (also NervosDAO doesn't support partial withdrawals, and NervosDAO's limit of 64 output cells doesn't help).

(If the solution you propose is able to bypass all these problems while being simpler than partially fulfill-able limit orders, please explain it a bit more! :pray::pray::pray:)

On the other side, a solution that maximizes user experience (and script logic separation) is the use of partially fulfillable limit orders. Hence the need for a lock script that can be unlocked either directly/indirectly by user signature or by limit order partial fulfillment, a need which could be fully satisfied by the proposed Omnilock Feature Request.

1 Like

I have a feeling we might be talking nonsense here without a clear picture of the iCKB structure, so I will refrain from commenting on the following assertions:

Deposit would be doable without limit orders even if it’s a two step process.
Withdrawals are not doable without supporting partial fulfill-ability.

Maybe they are true, maybe not. There is just not enough information to judge. I will leave that topic till another discussion.

Instead, I will refocus on the main point I want to make here:

  • Cobuild, in a sense, already provides P2SH by lock and P2SH by type for free now.
  • I would not recommend building a CKB app against a particular P2SH by type feature implemented in just omnilock here. It would be better to build apps using conventions from cobuild, so iCKB can ideally work with any potential cobuild-enabled locks. We've learned enough lessons; this is an example: Is the Kollect.me project gone? Does anyone even know whether the NFTs exist?

Let me explain one thing first: to me, the administrator mode in omnilock was designed at an early time, from a wrong set of assumptions, with a poor design. I would not encourage anyone to build upon administrator mode anymore; it is there for compatibility reasons, but if you really ask me, we should simply ditch it.

The core idea behind P2SH by type auth, correct me if I am wrong, is to defer all the validation logic of omnilock to another script in the current transaction. Even the signature validation part of omnilock is completely skipped; omnilock will have to trust the deferred type script to perform all necessary security checks.

My question here is: is this really something we want?

In a cobuild-compatible world, my personal vision, is that iCKB users will submit deposit & withdraw requests in the form of cobuild OTXs. Ideally iCKB users can use any cobuild-compatible locks in the input cells of those OTXs but let’s make it simple for now: assume those iCKB OTXs all come with input cells using omnilock as lock scripts. Let’s also assume that P2SH by type auth is implemented, all the input cells in an iCKB OTX, will defer the actual validation work to the iCKB type script included in the full CKB transaction as a type script.

I now have a question to ask: how can the user be sure here, that his / her submitted iCKB OTX will not be tampered with?

The lock scripts of input cells from the iCKB OTX will simply be omnilocks that defer the actual execution to an external type script. No signature validation is performed here. Any part of the OTX, including input cells, output cells and cobuild messages, can be tampered with by anyone in any way.

Yes it can be argued that it is still possible to build a secure implementation here:

  • An iCKB Action object addressed to the iCKB type script is included in the cobuild message of the submitted iCKB OTX
  • iCKB parameters, such as deposit & withdraw details are included in the iCKB Action object
  • A user-signed signature covering the whole OTX is included in the iCKB Action as well

The iCKB type script then loops against each OTX and does signature validation.

This solution works, but you are essentially coding against cobuild. There is nothing forbidding you from doing this, the best I can say to you here is, cobuild is not designed to be used this way.

I do want to stress this point one more time: this albeit-working solution is also manually coded against one particular feature implemented by one particular lock; P2SH by type auth is not a commonly-defined feature implemented by all cobuild-compatible locks. We've learned enough lessons; I plead with you and every CKB developer reading this to never design your app against a particular lock or type ever again.

What would be cobuild’s recommended way?

  1. A user creates an iCKB OTX covering his / her deposit or withdraw intentions.
  2. The input cells in this iCKB OTX are free to use any cobuild-compatible locks; a proper signature is generated by the user guarding the whole OTX, and is put in the seals part of the OTX witness data structure
  3. The deposit / withdraw parameters used by iCKB can also be put in an Action object in the Message field of this particular OTX. This very Action object shall use the iCKB type script's hash as the script_hash value. By definition of the cobuild protocols, a cobuild-compatible lock will ensure that the iCKB type script is included in the full CKB transaction.
  4. The iCKB type script can then loop through all OTXs in the current CKB transaction, extracting Action objects for iCKB and doing the necessary validations. The only difference is that the iCKB type script will not perform signature validation for each individual OTX. And this is one of the major points of OTX: locks should handle ownership and prevent malleability, while types do the real data checking.
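
Under the assumptions above, and with hypothetical field names loosely following the cobuild vocabulary, the flow might be sketched as:

```python
# Sketch of steps 1-4: the OTX carries the user's seal and an Action
# addressed to the iCKB type script; the type script later reads only
# the Action data and never validates signatures itself.

def build_otx(input_cells, params, type_script_hash, signature):
    return {
        "inputs": input_cells,
        "message": {"actions": [{
            "script_hash": type_script_hash,  # addressed to the type script
            "data": params,                   # deposit / withdraw details
        }]},
        "seals": [signature],  # checked by the lock scripts, not by types
    }

def extract_ickb_params(tx, type_script_hash):
    # Type script side: gather all iCKB Action data across OTXs.
    return [action["data"]
            for otx in tx["otxs"]
            for action in otx["message"]["actions"]
            if action["script_hash"] == type_script_hash]
```
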
3 Likes

I fully agree, but it has its own limitations too :raised_hands:

It came to my mind too, but since it's Omnilock, and so it supports most forms of locking and will be maintained, I brushed it off. Then again, fair point, thanks!! :pray:

Yup yup, this workflow works well for most protocols where user fully controls the underlying. Here it’s shared. Still let’s assume we use the CoBuild recommended way:

  • Both Deposits and Withdrawals are a two (or more) step process.
  • Deposits have per-deposit and per-transaction limits, so users with big amounts have to split them beforehand into smaller cells to avoid the partial fulfillment issue (and the limit of 64 output cells) and later on authorize an OTX for each cell (~100K CKB per standard deposit).
  • Users with small amounts have to group together to come close to a standard deposit size. They just authorize the OTX; the aggregator does the grouping.
  • Let's say we can manage the Deposit step your way, which is tricky for the second step, but maybe doable.

How to implement withdrawals?

  • Let's assume there exists an entity willing to lend some capital and take the role of aggregator.
  • All existing deposits are of very similar, but not exactly the same, size.
  • Users who want to withdraw small amounts have to group together their funds to come close to withdrawing a standard deposit size.
  • We are able to do the first withdrawal step with an OTX thanks to the aggregator, who provides capital to avoid the OTX partial fulfillment issue.
  • Now a single withdrawal request possibly has many users claiming a piece of it, possibly including the aggregator.
  • The owners of the Withdrawal Requests are stored in possibly many WR Receipts, one per user (or one per user & Withdrawal Request pair).
  • Each user takes turns claiming his capital from the WR cells using the WR receipt(s).

This shows that going with the CoBuild design is maybe possible (the devil is in the details tho), but the number of user interactions is a little too high (especially for users with bigger capital, who need to split their capital beforehand and authorize many OTXs) and the logic for grouping/splitting the deposits/withdrawals is a little too complex (consider that final amounts depend on the Header of the first transaction). Additionally, now this logic lives in the main iCKB script.

(Keep in mind iCKB will adopt xUDT to make the script code super simple, so to cut the attack surface for possible exploits :sweat_smile:)

This CoBuild OTX logic could instead live in a second contract, which handles all grouping and splitting. This means one additional transaction. Then again, this would likely defeat the main point of CoBuild OTX: using user partial signatures to assure the safety of transactions.

On the other side, we have a super simple limit order lock that is completely separated from iCKB and Omnilock, where the user can fully specify his intentions. Once the logic is sound it cannot be tampered with, and it's partially fulfillable by design. Sure, this approach has its own limitations, as it's capital intensive for the aggregator in the iCKB to CKB step.

I have a proposal for iCKB, but it’s slightly outdated… Then again if you’d like to bless me with your valued opinion, I’ll update it just for you!! :hugs:

2 Likes

I fully agree, but it has its own limitations too :raised_hands:

True but every solution, including P2SH by type auth introduced here has its own limitation.

it’s Omnilock and so it supports most form of locking and it will be maintained

Personally, I consider this the reason for me to use omnilock, not the reason my app should target only omnilock. For future CKB apps, I would discourage designing upon one particular lock or type, no matter what quality or support one can get from that particular lock or type.

Regarding general deposit & withdraw designs, I’m afraid we are mixing different issues together:

  1. How to move from a custom-designed pattern to cobuild design.
  2. NervosDAO's current 64-DAO-output-cell limitation adds trouble when splitting a big deposit into smaller ones, and when merging smaller withdraws into a bigger one.

To me the 2nd issue still exists and requires solving even if we use a P2SH by type auth design. The issue is still there. A cobuild-compatible design, in my opinion, merely requires moving certain data to a different location; it really does not affect the overall workflow.

Let me just say: I don’t really see how P2SH by type auth solves the 2nd issue, while cobuild can’t.

It might be the case that we are still talking in general terms that are quite misleading. If you want, there is no need for a detailed spec, but it might be better to add a tx example or two showcasing how splitting deposits and merging withdrawals can be handled in a P2SH by type auth design; we can then see how it can be transformed to a cobuild design.

2 Likes

Actually it is a pretty complex problem, independently of CoBuild OTX Actions, as there are even more subtle complications like:

  • The exchange rate between CKB and the iCKB UDT is dynamic, depending on the Header.
  • The amount of iCKB minted for a CKB deposit OTX depends on the aggregated deposit size, chosen by the aggregator…
  • The NervosDAO implementation is very much position-based.
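
To illustrate the first bullet, here is one plausible shape for a header-dependent exchange rate. The formula and the constant are assumptions made purely for illustration (the actual iCKB rate is defined by its own spec), building on the Nervos DAO accumulated rate AR found in block headers:

```python
# Assumed illustration: if iCKB were defined so that deposits made when
# the DAO accumulated rate (AR) is higher mint proportionally fewer
# iCKB, the conversion would be a simple ratio against a reference AR.

AR_GENESIS = 10_000_000_000_000_000  # hypothetical reference accumulated rate

def ckb_to_ickb(ckb_amount, ar_now):
    # Later deposits (higher ar_now) mint fewer iCKB per CKB.
    return ckb_amount * AR_GENESIS // ar_now
```
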

The only way I ever saw to abstract over the NervosDAO and iCKB protocol limitations is by using a cell lock that implements a limit order (abstracting the user intention) and that anyone can (partially) fulfill, similarly to an ACP. If you are interested, I can detail this one instead, so you can give me your valued opinion on its security.

So we create a general interface abstracting over the protocol details, and a DApp will be easy to build on top of it. Then again, now we need a bot with capital to do the actual interactions with the underlying iCKB protocol itself and to fulfill the orders. This bot usually requires no user interaction and for now it uses an unencrypted private key in a .env file to facilitate operator management.

In the past I also wanted to design a separate direct iCKB DApp to bypass the limit orders and interact directly with the iCKB deposit pool in case of extreme scenarios. Then I realized that this would be a duplicate of the bot functionalities as the bot already has to handle operator ingress and egress. By turning the bot interface into a Wallet-enabled locally run DApp we can achieve the following:

  • Depending on the signature method it can be fully autonomous or require Wallet authorization.
  • Advanced users and operators are served by the same single entry-point.
  • Users can choose to become advanced users and bypass fees.
  • Users can choose to become operators after seeing how easy it is to use.
3 Likes

Please do provide a detailed design however it might work, we can then work from there for a cobuild-enabled variation

3 Likes

I’d like to thank you again for your time and support, I appreciate a lot!! Usually I discuss these topics with @jm9k and we agree on most decisions, so hearing your perspective (which usually differs from mine) would be beneficial for both iCKB and me :pray:

Current design

Let’s start with Limit Order then. After talking with @doitian, I adopted the following molecule template schema:

array Uint64 <byte; 8>;
array Hash32 <byte; 32>;

array Bytes<T> <byte; T>;

struct lock<T> {
    codeHash: Hash32,     // 32 bytes
    hashType: byte,       // 1 byte
    args: Bytes<T>,       // T bytes
}

struct OrderArgs<T> {
    userLock: lock<T>,          // 33 + T bytes
    sudtHash: Hash32,           // 32 bytes
    isSudtToCkb: byte,          // 1 byte
    ckbMultiplier: Uint64,      // 8 bytes
    sudtMultiplier: Uint64,     // 8 bytes
    logMinFulfillment: byte,    // 1 byte
}

union OrderArgs {
    OrderArgs<0>,
    OrderArgs<1>,
    ...
    OrderArgs<255>,
}

Currently the limit order script stores all these parameters in its args; no data is stored anywhere else. The script validates that each interaction with a limit order cell is either a valid (partial) fulfillment or a cancel action.

The contract is position-based: if at input index i there is a cell with the limit order lock, the output cell at index i is either a limit order (similar to a NervosDAO withdrawal request) or the completely fulfilled cell with the user lock.

Also, there is the possibility to cancel a limit order (at index i) by signature delegation (an input cell with the user lock must exist) and then the output cell at index i is a cell with the user lock.
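
The position-based rule just described might be sketched as follows; the cell shapes are assumptions for illustration:

```python
# Sketch of the positional rule: every input at index i locked by the
# limit order script must pair with an output at the same index i that
# is either an updated order or a cell with the user's lock.

def check_positional(tx, order_lock, user_lock):
    for i, cell in enumerate(tx["inputs"]):
        if cell["lock"] != order_lock:
            continue
        out = tx["outputs"][i]  # same index, per the positional rule
        if out["lock"] not in (order_lock, user_lock):
            return False
    return True
```
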

Pitfall of transition from limit order to user lock without a signature

My constant fear with this design is that there may exist another script, let's call it cooler limit order, with slightly different validation rules. A user may use both at the same time, place an order in both, and unknowingly both orders have compatible completely fulfilled cells. (These cells have the user lock for improved user experience, as they effectively remove one transaction for the user.) The attack vector is the following: an attacker can completely fulfill both the limit order and the cooler limit order with the same output cell (so both scripts validate) and steal the rest. This is a remote possibility, still it is troubling.

Ideas for a CoBuild OTX design

  1. Due to the possible attack, we could require that the completely fulfilled cell step happens only with user signature delegation, same as with cancelling. If necessary we could also store the user lock as a Hash32, which would also avoid having a dynamic-length field.

  2. Since CoBuild OTX is definitely not position friendly (or better, it's positional relative to the OTX), the output cell must be able to reference the input cell in other ways. For example, the action could store both the outpoint of the input cell it is matching against and a reference to the output cell. An alternative is to define a synthetic ID that identifies that particular limit order. Then again, I see no way to guarantee its uniqueness. It could be unique within the OTX, but that may still create issues. Another possibility could be to keep the positional approach, but relative to the OTX.

  3. The signature delegation in a CoBuild OTX Order would need to check that the input user cell exists in the very same OTX. The same tx is not enough to assure safety, as it could easily be attacked.

  4. We don't need to require the corresponding output of a fulfilled or cancelled cell to have the user lock. Same as when completing a withdrawal from the NervosDAO, it can right away be used for anything the user decides. This should be safe since the OTX contains the user signature.

  5. Since now the Cancel action and the Withdraw Fulfilled action have the very same logic, they can be unified as Melt, so there are only three CoBuild Actions: Mint, Fulfill and Melt. Only Melt requires a delegated user signature, while Fulfill is exclusively validated by the Order logic.

  6. Limit Order will support both sUDT and xUDT.

  7. All data could be stored in the CoBuild witness's action, and the lock script may only need to store the hash of this data as script args or XudtData lock for the next fulfillment round. In the case of sUDT the very same XudtData convention can be used.

  8. If we impose the rule that all limit orders must already have the correct UDT Type (so not undefined) at Mint time, then it could be possible to drop the sudtHash field from the Args. Then again, if we follow the design of 7, it could still be required in the Action.

  9. Another idea, concurrent with 7 (storing only the hash of the data), is to store the limit order data in a secondary cell, with the limit order type and the user lock. This cell is then referenced as a CellDep, and its outpoint is the synthetic ID needed to identify the order. The limit order lock script could contain either the outpoint directly (as args or XudtData) or just a reference to its position in the OTX CellDeps (so the weaker ID, unique only within the OTX). This design also improves the security of Melt, as it requires a specific user cell (this secondary cell) to be consumed in the same OTX as the limit order Melt. Then again, this solution would require an additional disbursement of CKB for the secondary cell and for referencing it from the limit order cell.

All in all, probably a variation of 9 would be the most future-proof design I can think of; then again, 7 would have a smaller CKB footprint.

2 Likes

I must say that I still don't see a detailed design here. We don't need a fully complete proposal to start talking, but I would personally expect something like the spore examples put together, so one can see what a complete CKB transaction with iCKB interactions might look like. Unfortunately, what we have here is just a snippet of a small data structure; I must say I still don't have a picture of the full iCKB design to comment on whether it is viable.

Instead, I'm gonna do it a slightly different way: I will put together a small design as if I'm doing a liquidable Nervos DAO UDT in the context of cobuild. I will first list a few assumptions I will use in my UDT, then come up with a design from those assumptions. It's possible that my assumptions here will be different from iCKB's assumptions, hence leading to different designs. But with the information I have at hand, this is the best I can do.

So without further ado, here are the assumptions I will use:

  • A special UDT token will be introduced. I'm gonna name it xDAOCKB. xDAOCKB can be minted when a Nervos DAO deposit into a special lock address is included in the same CKB transaction. One can only mint as many xDAOCKB as the DAO deposit.
  • Smaller deposits and bigger withdraws are supported by xDAOCKB, but as we shall see later, they will be implemented via Cobuild OTXs, not as Nervos DAO deposits / withdraws directly.
  • xDAOCKB can be freely transferred to any cobuild-compatible accounts (this has a potential BIG issue we will address later).
  • Any xDAOCKB owners can put up limit orders selling xDAOCKB in exchange for (could be slightly smaller than typical DAO withdrawals to give liquidity provider incentives) corresponding CKBs with some interests. A limit order will also be in the form of Cobuild OTX.
  • A liquidity provider can fully / partially fulfill limit orders in the form of a Cobuild OTX, buying and burning xDAOCKBs from limit orders, providing limit order creators with the specified CKBs, and initializing CKB withdraw requests in the same CKB transaction.

I personally believe the above assumptions already make a decent liquidable Nervos DAO app but if you have a specific requirement you want in mind, feel free to add via more comments.

And there is also one big issue with xDAOCKB that could potentially lead to a drastically different design: not all xDAOCKB tokens are created equal. Assuming that 1000 xDAOCKBs are issued at block 10000 with DAO deposit cell A, but 2000 xDAOCKBs are then issued at block 20000 with DAO deposit cell B, what if people combine the xDAOCKBs issued at block 20000 with DAO deposit cell A for withdrawals? This would create an unfair situation. Here I'm gonna talk about 2 different designs; they each have their own pros & cons. Different people would favor different ones, but as a protocol designer, I will simply list both designs without preference.

  1. We add a validation rule to xDAOCKB: the creation order of Nervos DAO deposit cells will be maintained by xDAOCKB, so that a liquidity provider can only provide the earliest Nervos DAO deposit cell to withdraw from. The market will then adjust the limit order price, so a reasonable balance between Nervos DAO interest and the buy-out price can be reached.
  2. Take inspiration from Bitcoin Inscription and Spore: what if we treat each issuance of xDAOCKB as an NFT, not a UDT? Different xDAOCKBs are issued at different times for different Nervos DAO deposits. What if we treat the whole issuance of xDAOCKB as an NFT, and only allow the transfer of a batch of xDAOCKBs as a group? That will essentially simplify withdrawing by a large margin.
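
The unfairness described above can be made concrete with the Nervos DAO compensation rule, where a deposit made at accumulated rate AR_d and withdrawn at AR_w is compensated proportionally to AR_w / AR_d. The AR values below are invented for illustration:

```python
# Worked example of the fairness issue: tokens minted against the later
# deposit cell B, if withdrawn against the earlier cell A, capture extra
# compensation that rightfully belongs to cell A's minters.

def dao_withdraw(deposit, ar_deposit, ar_withdraw):
    # Nervos DAO compensation is proportional to the AR ratio.
    return deposit * ar_withdraw // ar_deposit

AR_CELL_A = 100  # AR when deposit cell A was created (block 10000)
AR_CELL_B = 110  # AR when deposit cell B was created (block 20000)
AR_NOW = 121     # AR at withdrawal time

via_a = dao_withdraw(1000, AR_CELL_A, AR_NOW)  # 1210 CKB
via_b = dao_withdraw(1000, AR_CELL_B, AR_NOW)  # 1100 CKB
```

Pairing block-20000 tokens with cell A thus yields more CKB than the deposit backing those tokens actually earned, which is why some ordering or batching rule is needed.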

Again, let me stress the point that it is not my position nor my interest to debate which of the above 2 designs is the better one. I’m just saying I see 2 solutions here, both of which work from a technical perspective, and I will explain both here.

Ordering withdrawals

Let’s first talk about the case where xDAOCKB can be treated as a true UDT and freely transferred in any amount.

For xDAOCKB to work on chain, there are 3 different types of cells involved:

  • xDAOCKB entity cell: a unique (could be ensured via type id) entity cell that exists on chain for a particular deployment of the xDAOCKB app. It contains certain metadata (such as the minimal CKBytes per Nervos DAO deposit batch) and bookkeeping information (all Nervos DAO deposits into the xDAOCKB app). This cell uses an always-success script as the lock script and the xDAOCKB app script as the type script. Note that the xDAOCKB app script does the majority of the validation work of xDAOCKB.
  • xDAOCKB asset cell: a series of cells used to hold the CKBytes deposited into the xDAOCKB app. A xDAOCKB app might have a series of xDAOCKB asset cells depending on actual usage. A xDAOCKB asset cell uses the xDAOCKB ensuring script as the lock script and the Nervos DAO type script as the type script. The xDAOCKB ensuring script does only one thing: it validates that the current CKB transaction has a cell using the xDAOCKB app script as its type script.
  • xDAOCKB cell: this just refers to cells containing xDAOCKB UDTs. Per the UDT design, a xDAOCKB cell can use any lock script as the lock; it uses the xDAOCKB UDT script as the type script. There can be as many xDAOCKB cells on chain as needed, and one is also free to split a xDAOCKB cell into multiple xDAOCKB cells, each holding a portion of the xDAOCKB UDTs. I believe we can simply use the xDAOCKB app script as the owner lock for the xDAOCKB UDT script.
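As a rough illustration, the ensuring script’s single rule could be simulated off-chain like this (a hypothetical sketch, not the real on-chain implementation; the placeholder hash is invented):

```python
# ASSUMED placeholder for the xDAOCKB app script's hash (hypothetical value).
XDAOCKB_APP_SCRIPT_HASH = "0x" + "aa" * 32

def ensuring_script_passes(tx_cells):
    """The xDAOCKB ensuring script's only rule: the transaction must contain
    a cell whose type script is the xDAOCKB app script.
    tx_cells: dicts with a 'type_script_hash' entry (None when absent)."""
    return any(cell["type_script_hash"] == XDAOCKB_APP_SCRIPT_HASH
               for cell in tx_cells)

# The asset cell unlocks only when the entity cell (which carries the real
# validation logic in its type script) is present in the same transaction.
assert ensuring_script_passes([{"type_script_hash": XDAOCKB_APP_SCRIPT_HASH}])
assert not ensuring_script_passes([{"type_script_hash": None}])
```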

Deposit OTX

Assuming Alice wants to deposit 1100 CKB into xDAOCKB app, Alice can create an OTX like the following:

inputs:
  input 0:
    capacity: 5000 CKB
    lock: Any OTX-compatible lock of Alice's (let's call it lock A)
    type: <EMPTY>
    data: <EMPTY>
outputs:
  output 0:
    capacity: 3499.99 CKB
    lock: Any lock of Alice's
    type: <EMPTY>
    data: <EMPTY>
  output 1:
    capacity: 400 CKB
    lock: Any lock of Alice's
    type: xDAOCKB UDT script
    data: 1100 xDAOCKB
witnesses:
  witness 0: WitnessLayout format, Otx variant
    seals:
      0: Seal format
        script_hash: lock A's hash
        seal: signature for lock A
    input_cells: 1
    output_cells: 2
    message: Message format
      actions:
        0: Action format
          script_info_hash: xDAOCKB UDT ScriptInfo's hash
          script_hash: xDAOCKB UDT script's hash
          data: cobuild message for issuing 1100 xDAOCKB
        1: Action format
          script_info_hash: xDAOCKB app ScriptInfo's hash
          script_hash: xDAOCKB app script's hash
          data: cobuild message for depositing 1100 CKB

Several interesting points of this OTX:

  • It does not use the Nervos DAO script at all. There are no deposit cells; there is only a cell holding the issued 1100 xDAOCKB. So there is no concern about the Nervos DAO script’s limits: a user can deposit any amount of CKBytes he / she wants.
  • The user is free to use any lock he / she wants. The input cell must use an OTX-compatible lock since this is an OTX; the output cells are free to use any lock, even non-OTX-compatible ones.
  • The user is in charge of providing the capacity cost for the xDAOCKB cells (in reality this can change); the user also provides 0.01 CKBytes as an incentive for OTX processors to process this OTX.
  • A signature is included in seals to ensure no one can alter this OTX.
  • The cobuild message part has 2 Action objects: one for the issuing UDT script, one containing the action for the xDAOCKB app.
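One way to read the numbers above is as a balance invariant the xDAOCKB app script could enforce per deposit OTX (my formulation, reconstructed from the example, not a spec): the CKBytes the user leaves behind, minus the processor incentive, equal the xDAOCKB issued.

```python
# ASSUMED invariant, reconstructed from the example OTX above.
PROCESSOR_INCENTIVE = 0.01  # CKB, as in the example

def deposited_ckb(input_capacities, output_capacities):
    """CKBytes the OTX contributes to the eventual Nervos DAO deposit."""
    total = sum(input_capacities) - sum(output_capacities) - PROCESSOR_INCENTIVE
    return round(total, 8)  # avoid float noise in this toy model

# Alice's OTX: one 5000 CKB input; a 3499.99 CKB change cell and a 400 CKB
# cell holding the newly issued 1100 xDAOCKB.
assert deposited_ckb([5000], [3499.99, 400]) == 1100  # matches the issuance
```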

Merging Deposit OTXs into CKB Tx

With a few gathered deposit OTXs, an OTX processor can then merge them into one CKB transaction, creating a single Nervos DAO deposit request:

inputs:
  input 0(otx #0):
    capacity: 5000 CKB
    lock: Any OTX-compatible lock of Alice's (let's call it lock A)
    type: <EMPTY>
    data: <EMPTY>
  input 1(otx #1):
    capacity: 3000 CKB
    lock: Any OTX-compatible lock of Bob's (let's call it lock B)
    type: <EMPTY>
    data: <EMPTY>
  input 2:
    capacity: xDAOCKB app capacity
    lock: always-success lock
    type: xDAOCKB app script
    data: xDAOCKB app data
outputs:
  output 0(otx #0):
    capacity: 3499.99 CKB
    lock: Any lock of Alice's
    type: <EMPTY>
    data: <EMPTY>
  output 1(otx #0):
    capacity: 400 CKB
    lock: Any lock of Alice's
    type: xDAOCKB UDT script
    data: 1100 xDAOCKB
  output 2(otx #1):
    capacity: 499.99 CKB
    lock: Any lock of Bob's
    type: xDAOCKB UDT script
    data: 2500 xDAOCKB
  output 3:
    capacity: 3600 CKB
    lock: xDAOCKB ensuring script
    type: Nervos DAO script
    data: Nervos DAO data
  output 4:
    capacity: updated xDAOCKB app capacity
    lock: always-success lock
    type: xDAOCKB app script
    data: updated xDAOCKB app data
witnesses:
  witness 0: Witness for Otx #0
  witness 1: Witness for Otx #1

This CKB transaction merges 2 OTXs together: Alice deposits 1100 CKB into xDAOCKB via OTX #0, Bob deposits 2500 CKB via OTX #1. But as the CKB Tx shows, no matter how many deposit OTXs there are, only one Nervos DAO deposit cell is created (output 3 above); it combines all the individual deposits from the OTXs into a single Nervos DAO deposit. The xDAOCKB entity cell (input 2 & output 4) is also included in the CKB transaction, so the xDAOCKB app script can perform all the checking needed and update all the bookkeeping information (the current Nervos DAO deposit information must also be included in the xDAOCKB app data for maintaining deposit ordering). Also, since the xDAOCKB app script is included in the current Tx, the xDAOCKB ensuring script can succeed in its execution, and the xDAOCKB UDT script can successfully issue xDAOCKBs to Alice and Bob.
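The merge-level arithmetic can be sketched as follows (an assumed formulation based on the example): the single Nervos DAO deposit cell must hold exactly the sum of each OTX's contribution.

```python
INCENTIVE = 0.01  # CKB per OTX, as in the example

def otx_deposit(inputs, outputs):
    """CKBytes one deposit OTX contributes, net of the processor incentive."""
    return round(sum(inputs) - sum(outputs) - INCENTIVE, 8)

alice = otx_deposit([5000], [3499.99, 400])  # OTX #0
bob = otx_deposit([3000], [499.99])          # OTX #1
assert alice == 1100 and bob == 2500
assert alice + bob == 3600  # capacity of the Nervos DAO deposit cell (output 3)
```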

Withdraw OTX via a limit order

With xDAOCKB, there is no direct withdrawing from Nervos DAO; instead, a withdraw request is handled as a limit order:

inputs:
  input 0:
    capacity: 400 CKB
    lock: Any OTX-compatible lock of Alice's (let's call it lock A)
    type: xDAOCKB UDT script
    data: 1100 xDAOCKB
outputs:
witnesses:
  witness 0: WitnessLayout format, Otx variant
    seals:
      0: Seal format
        script_hash: lock A's hash
        seal: signature for lock A
    input_cells: 1
    output_cells: 0
    message: Message format
      actions:
        0: Action format
          script_info_hash: xDAOCKB UDT ScriptInfo's hash
          script_hash: xDAOCKB UDT script's hash
          data: cobuild message for burning (at most) 1100 xDAOCKB
        1: Action format
          script_info_hash: xDAOCKB app ScriptInfo's hash
          script_hash: xDAOCKB app script's hash
          data: cobuild message for withdrawing limit order
            ask_interest: 0.00005 CKB
            partial_fulfill_enabled: true

Like the deposit OTX, the withdraw OTX does not contain any Nervos DAO cells. It merely provides an input cell providing xDAOCKB, and an Action object (at action #1) containing the limit order information. Of course different apps will want to use different parameter formats for the limit order information; here I merely include the absolutely necessary parts:

  • ask_interest: for each xDAOCKB sold by the limit order, we ask for 1.00005 CKB
  • partial_fulfill_enabled: whether the order can be partially fulfilled, or only fulfilled in full.
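Under this reading, a fill of the order could be validated roughly as below (a sketch of my interpretation of the two parameters, not the actual script logic):

```python
def min_payout(xdaockb_sold, ask_interest):
    """Each xDAOCKB sold must fetch at least (1 + ask_interest) CKB."""
    return xdaockb_sold * (1 + ask_interest)

def fill_is_valid(xdaockb_sold, ckb_paid, ask_interest,
                  order_size, partial_fulfill_enabled):
    if not partial_fulfill_enabled and xdaockb_sold != order_size:
        return False  # all-or-nothing order
    return (xdaockb_sold <= order_size
            and ckb_paid >= min_payout(xdaockb_sold, ask_interest))

# Alice's order: up to 1100 xDAOCKB at ask_interest = 0.00005.
assert fill_is_valid(1100, 1100.1, 0.00005, 1100, True)      # 1100.1 >= 1100.055
assert not fill_is_valid(500, 500.0, 0.00005, 1100, True)    # underpays
assert not fill_is_valid(500, 500.03, 0.00005, 1100, False)  # partial forbidden
```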

The total tradable xDAOCKB is thus read from the OTX itself. In the above OTX, Alice wants to trade at most 1100 xDAOCKB for CKB. But if Alice wants to trade less xDAOCKB, a slightly different OTX can be used:

inputs:
  input 0:
    capacity: 400 CKB
    lock: Any OTX-compatible lock of Alice's (let's call it lock A)
    type: xDAOCKB UDT script
    data: 1100 xDAOCKB
outputs:
  output 0:
    capacity: 400 CKB
    lock: Any lock of Alice's
    type: xDAOCKB UDT script
    data: 600 xDAOCKB
witnesses:
  witness 0: WitnessLayout format, Otx variant
    seals:
      0: Seal format
        script_hash: lock A's hash
        seal: signature for lock A
    input_cells: 1
    output_cells: 1
    message: Message format
      actions:
        0: Action format
          script_info_hash: xDAOCKB UDT ScriptInfo's hash
          script_hash: xDAOCKB UDT script's hash
          data: cobuild message for burning (at most) 500 xDAOCKB
        1: Action format
          script_info_hash: xDAOCKB app ScriptInfo's hash
          script_hash: xDAOCKB app script's hash
          data: cobuild message for withdrawing limit order
            ask_interest: 0.00005 CKB
            partial_fulfill_enabled: true

In this OTX, Alice only wants to trade 500 xDAOCKB.

Merging Withdraw OTXs into CKB Tx

In xDAOCKB, liquidity providers act as OTX processors: they monitor withdraw OTXs submitted by users, and when they see orders they want to fulfill, they can merge those withdraw OTXs into a proper CKB Tx, like the following:

inputs:
  input 0(otx #0):
    capacity: 400 CKB
    lock: Any OTX-compatible lock of Alice's (let's call it lock A)
    type: xDAOCKB UDT script
    data: 1100 xDAOCKB
  input 1(otx #1):
    capacity: 400 CKB
    lock: Any OTX-compatible lock of Bob's (let's call it lock B)
    type: xDAOCKB UDT script
    data: 3000 xDAOCKB
  input 2:
    capacity: 10000 CKB
    lock: xDAOCKB ensuring script
    type: Nervos DAO type script
    data: Nervos DAO data
  input 3:
    capacity: xDAOCKB app capacity
    lock: always-success lock
    type: xDAOCKB app script
    data: xDAOCKB app data
  any required cells from LP to make CKBytes balanced
outputs:
  output 0(otx #1):
    capacity: 400 CKB
    lock: Any lock of Bob's
    type: xDAOCKB UDT script
    data: 1000 xDAOCKB
  output 1(fulfilling otx #0):
    capacity: 1100.1 CKB
    lock: Recipient lock specified by Alice
    type: <EMPTY>
    data: <EMPTY>
  output 2(partially fulfilling otx #1):
    capacity: 1000.1 CKB
    lock: Recipient lock specified by Bob
    type: xDAOCKB UDT script
    data: 1000 xDAOCKB
  output 3:
    capacity: 7900 CKB
    lock: xDAOCKB ensuring script
    type: Nervos DAO type script
    data: Nervos DAO data
  output 4:
    capacity: updated xDAOCKB app capacity
    lock: always-success lock
    type: xDAOCKB app script
    data: updated xDAOCKB app data
  output 5:
    capacity: 10000 CKB
    lock: Any lock of LP's
    type: Nervos DAO type script
    data: Nervos DAO data for withdrawing cell
  any change cells to the LP

In the above transaction, we have completely fulfilled OTX #0 (1100.1 CKB for 1100 xDAOCKB), while partially fulfilling OTX #1 (1000.1 CKB for 1000 xDAOCKB, while Bob wants to trade 2000 xDAOCKB). Even though there are multiple limit orders, we only withdraw from one Nervos DAO deposit cell (input 2). For security reasons, if the burned xDAOCKB is less than the CKBytes in the withdrawn Nervos DAO cell, a new Nervos DAO deposit cell must be created (output 3 here). The xDAOCKB entity cell is also included (input 3 & output 4), which does the bulk of the validation and bookkeeping work.
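The redeposit rule can be sketched numerically (my formulation of the check, using the figures from the example):

```python
def redeposit_required(withdrawn_dao_capacity, burned_xdaockb):
    """When less xDAOCKB is burned than the withdrawn DAO cell holds, the
    difference must go into a new Nervos DAO deposit cell."""
    assert burned_xdaockb <= withdrawn_dao_capacity
    return withdrawn_dao_capacity - burned_xdaockb

# Input 2 holds a 10000 CKB deposit; OTX #0 burns 1100 xDAOCKB and OTX #1
# burns 1000, so 7900 CKB must be re-deposited (the new DAO deposit cell).
assert redeposit_required(10000, 1100 + 1000) == 7900
```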

Recall from the previous discussion that xDAOCKB issued at different times is not equal: some might be issued from DAO deposit cells that have existed longer than others, hence bearing more interest. To prevent attackers from exploiting this, the xDAOCKB entity cell will validate that the withdrawn Nervos DAO cell (input 2) is the oldest DAO deposit cell of the current xDAOCKB app.

Apart from the cells displayed above, an LP might need to include other cells in the transaction as well, so as to provide the CKBytes for fulfilled limit orders, as well as L1 transaction fees. The LP can then claim the withdrawn Nervos DAO cell (output 5), and gather the locked CKBytes with interest after a certain lock period.

Up to this point, we’ve designed a liquidable Nervos DAO app that supports small & large deposits / withdrawals, is not affected by the Nervos DAO script limitation, and also supports complete / partial fulfillment of limit orders. Such an app is also cobuild-compatible, and is designed to leverage the cobuild protocol from the ground up.

NFT-style xDAOCKB

NFT-style xDAOCKB is similar to the above, but would be different in certain points:

  • The issued xDAOCKB must keep the block number of the DAO deposit cell in its cell’s data storage. This can be done in several ways:
    • A 2-phase design can be used: a first transaction deposits CKBytes into Nervos DAO, and a follow-up transaction issues the xDAOCKB tokens.
    • An approximation can also be used: we can set the since value of one input in the transaction, then use the block number / epoch number contained in the since value as the block number for issuing the xDAOCKB tokens. This ensures that the DAO deposit cell can only be committed at or after the since value, which also provides security guarantees.
  • A cell containing xDAOCKB can only be transferred as a single unit; there is no way to break a xDAOCKB cell into multiple cells, each containing a part of the xDAOCKB tokens.
  • There is no need to maintain the order of deposited DAO cells in the xDAOCKB entity cell.
  • When building a withdrawing CKB transaction, the xDAOCKB app script must check that, for each consumed xDAOCKB cell, the block number included in the xDAOCKB cell is no bigger than the block number where the withdrawn DAO cell was committed.

This way we limit xDAOCKB cells so they can only earn interest starting from the point they were first issued, but not before that particular time point.
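The block-number rule could be checked roughly like this (a sketch of my reading of the rule above, with made-up block numbers):

```python
def withdrawal_allowed(xdaockb_issue_blocks, dao_deposit_block):
    """Every consumed xDAOCKB cell's recorded issuance block number must be
    no bigger than the block where the withdrawn DAO cell was committed."""
    return all(block <= dao_deposit_block for block in xdaockb_issue_blocks)

# Tokens issued at blocks 10000 and 15000 may redeem a deposit committed at
# block 20000: the deposit only accrued interest after they existed.
assert withdrawal_allowed([10000, 15000], 20000)
# Tokens issued at block 25000 may NOT redeem it: they would capture the
# interest the deposit accrued between blocks 20000 and 25000.
assert not withdrawal_allowed([25000], 20000)
```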

Recap

Here we have shown how to build a liquidable Nervos DAO app in the context of the cobuild protocol. The best part of this design is that we are not hardcoding against any particular lock script: one is free to use any lock to interact with xDAOCKB, such as the secp256k1-sighash-all lock included in the genesis block, omnilock, Unipass, dotbit (I’m not 100% sure on the dotbit part, since last time I checked dotbit still uses a lot of type scripts, and this app requires a wallet to only use lock scripts), Joy ID, etc. Hopefully this can shed some light on future CKB app development.


You know @xxuejie, I feel that we are doing double work here!! :rofl::rofl::rofl:

Point taken, I updated the iCKB proposal to a xUDT design. Let me know if I should cross post it here! :hugs:

Hmmm, trouble ahead

Exactly!! This is the underlying issue: this design is more similar to dCKB than iCKB. It could easily be implemented by using a custom lock for new dCKB deposit receipts. On the other hand, iCKB’s basic idea is to account for all deposits in a fair way. It’s a bit more involved since it requires accessing the header, but it’s well worth it.

This requires a global state update to account for every deposit. It also crucially increases illiquidity, as it increases the capital requirement for liquidity providers: the oldest deposit is on average ~90 epochs away from its Claim time (assuming a uniform probability distribution), while in the worst case it could be ~180 epochs away :exploding_head:

In iCKB every deposit can be freely withdrawn from, especially those whose Claim time is close.

This protocol is vulnerable to DoS. Let’s assume an attacker has big enough capital (in the order of GB): he can withdraw from small deposits and redeposit in deposits as big as the entirety of his capital. Doing this for all xDAOCKB-controlled deposits will hamper the use of xDAOCKB, as it’s impossible to withdraw partially from a Nervos DAO deposit.

To avoid this form of DoS it is necessary to put a cap on the deposit size.

This is also the reason why iCKB poses a soft limit on the deposit size: exceeding this limit can cause up to a 10% loss. So this attack is prevented by slashing the attacker’s capital.

My approach instead obtains similar results with a high enough value of logMinFulfillment, while also allowing higher expressiveness.

Regarding partial fulfillment, our interpretations have crucial differences.

In my interpretation, user funds are locked with a custom lock that validates whether a partial fulfillment is valid; no user signature is involved.

In your example, every time the order is partially fulfilled, the user must sign another OTX. This could improve security (to prove this point you should provide an example where my implementation would inevitably fail) at the expense of user experience. Then again, you also must include additional logic for validating a partial fulfillment, which would have pitfalls similar to mine.

I’d say that if an order is small enough to be fully fulfilled (no partial fulfillment), your interpretation of limit orders has better user experience, while for amounts too big, mine is better.

Now I’m thinking: since your limit order interpretation works really well for small amounts, I could make the user interact directly with the protocol until their remaining assets to convert are less than one standard deposit (or a few standard deposits). At that point I could use your limit order approach (without partial fulfillment). This approach has a user experience similar to your design after the necessary DoS mitigation measures.

By the way, this flow is very similar to the bot use-case I was proposing before, see:

Except this time a CoBuild limit order is used.

This could have some state contention issues at withdrawal time, then again I could let the user make an informed choice between creating a limit order and interacting directly with the protocol, so it should be fine.

By the way, how does the aggregator/bot collect these OTXs? Is there any standardized way?

Not really my cup of tea, but it could be promising. In Spore you can add additional CKB to a NFT for increasing its intrinsic value and it can be redeemed with Melt. It could be possible to integrate a deposit within Spore itself as a secondary cell controlled by the Spore NFT.

Hmmm, I see your point now. I’d say it kind of defeats the purpose of creating this protocol. It has much better value in my erroneous interpretation as Spore + NervosDAO!! :sweat_smile:


You know @xxuejie, I feel that we are doing double work here!! :rofl::rofl::rofl:

Rest assured, I have no interest in building a similar product. It’s just that communication has been so inefficient that I can only squeeze the design of iCKB from your replies bit by bit. So I figured I’d do it a different way: I can spend a couple of hours piecing together an alternate design. Maybe you have other concerns putting together a complete design, I get it, but by presenting an alternative design, it becomes easier for me to learn how iCKB plans to design things. And it really seems my goal has been achieved here.

This requires a global state update to account for every deposit.

Global state contention is a known artifact of Cobuild OTX. All OTX processors, regardless of the actual use case of the OTXs, will have a level of shared state contention involved. There are 2 points here:

  • At the user level, constructing OTXs is free from shared state contention. Anyone is free to construct as many OTXs as he / she wishes without worrying about shared state. And that is really the point.
  • OTX processors will indeed compete for shared cells, but OTX processors are really programs: in case of contention, they can simply rebuild a new CKB tx containing the updated shared cells. There is no real issue here. In addition, it has been discussed that miners are particularly suitable to act as OTX processors as well, since they know the details of the blocks they are producing, and are free from shared cell contention.

It also crucially increases illiquidity, as it increases the capital requirement for liquidity providers

This is really just a balance between the liquidity required from LPs and the minimum CKBytes required by a deposit batch, so that Nervos DAO’s 64 cell limit ceases to be a problem. To me it is nothing related to cobuild, but one parameter we can tune for better usability. You can use a smaller limit for batch deposits, in exchange for a limited number of deposits processed per CKB tx (but you always have the option to use more CKB txs; again, it’s just how you want to balance things).

This protocol is vulnerable to DoS. Let’s assume an attacker has big enough capital (in the order of GB): he can withdraw from small deposits and redeposit in deposits as big as the entirety of his capital. Doing this for all xDAOCKB-controlled deposits will hamper the use of xDAOCKB, as it’s impossible to withdraw partially from a Nervos DAO deposit.

To avoid this form of DoS it is necessary to put a cap on the deposit size.

This is also the reason why iCKB poses a soft limit on the deposit size: exceeding this limit can cause up to a 10% loss. So this attack is prevented by slashing the attacker’s capital.

Yes, by all means, a limit on deposit size can indeed be used. And this is why I listed the assumptions above: we can add a new assumption here, like a cap on deposit size, and it’s easy to adjust the design.

In your example every time the order is partially fulfilled, the user must sign another OTX.

Actually, both are doable. A user just signs an OTX saying that he / she wants to sell, say, 2000 xDAOCKB; an LP in the system can issue an OTX saying he / she will pay 1500.1 CKB for 1500 xDAOCKB, and yet another LP issues a second OTX saying he / she will pay 500.05 CKB for 500 xDAOCKB. An OTX processor can then pack all 3 OTXs together, achieving the same goal. And that is really how Cobuild OTX is designed to work.
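A toy sketch of how an OTX processor might pack such complementary OTXs (hypothetical data structures; a real processor works on full Cobuild OTX objects):

```python
def pack_fills(sell_amount, buy_otxs):
    """buy_otxs: list of (xdaockb_bought, ckb_paid) pairs. Greedily pick a
    subset whose amounts exactly cover the sell order; None if impossible."""
    taken, covered = [], 0
    for amount, ckb_paid in buy_otxs:
        if covered + amount <= sell_amount:
            taken.append((amount, ckb_paid))
            covered += amount
    return taken if covered == sell_amount else None

# User sells 2000 xDAOCKB; one LP pays 1500.1 CKB for 1500, another pays
# 500.05 CKB for 500. The processor packs all three OTXs into one CKB tx.
fills = pack_fills(2000, [(1500, 1500.1), (500, 500.05)])
assert fills is not None and sum(amount for amount, _ in fills) == 2000
```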

Now I’m thinking: since your limit order interpretation works really well for small amounts, I could make the user interact directly with the protocol until their remaining assets to convert are less than one standard deposit (or a few standard deposits). At that point I could use your limit order approach (without partial fulfillment). This approach has a user experience similar to your design after the necessary DoS mitigation measures.

That is the goal. My hope is that by comparing designs (well this has to be done in your mind) and absorbing ideas, an improved one with cobuild in mind can be achieved.

Basically I now get that iCKB uses a third solution, which ties the exchange rate to CKB’s secondary issuance rate, but I feel like it is just a third solution to the different deposit time issue. It is possible to combine the cobuild-OTX-based design here with iCKB’s solution for the exchange rate. To me that would be an even better solution, and if I were designing iCKB, I would personally go this way.


Hey @xxuejie, hope everything is going well. I noticed that you are reviewing the xUDT RFC PR and fighting to improve its readability. That’s very appreciated, keep it up!! :muscle:

This is an advanced POC; I was wondering how much time it could take to have a production-ready CoBuild Rust library?

CoBuild OTX Collector, Missing

These days I have started to appreciate CoBuild OTX more and more, so I started looking at how to integrate CoBuild OTX-style limit orders into the iCKB flow. Then again, I realized that a key Service is missing for integrating CoBuild OTX into iCKB; let’s call it the OTX Collector.

A CoBuild OTX Collector would be defined by the following functionalities:

  • Buffer the received messages, while being resistant to DoS.

  • Check that these messages can be parsed as CoBuild OTXs, thus shielding aggregators from malicious XSS messages.

  • Organize the CoBuild OTX in Topics.

  • As noted by @xxuejie, avoid validating the received CoBuild OTX as it’s a hard problem. (Validation could be achieved by the specific aggregator retrying aggregated transactions enough times, excluding the non-validating OTXs little by little.)

  • Make the received CoBuild OTXs publicly available to aggregators.

  • Possibly be organized as a P2P network for high availability and for resisting DDoS attacks.

No CoBuild OTX Collector is currently available, and I can’t afford to create, deploy and maintain one as I don’t have the required mental power.

iCKB Staggered Adoption of CoBuild

At this point, while in my opinion CoBuild OTX is solid, its integration into iCKB is not 100% workable right away. That’s why we are taking a staggered approach for iCKB. In V1:

  1. I’ll complete the proposal, updating the definitions of all scripts in a way that is both L1 Classic and CoBuild OTX compatible. Then I’ll kindly ask @xxuejie for a review.

  2. I’ll implement these scripts in L1 Classic and, if CoBuild Rust libraries are production ready, possibly in CoBuild (OTX).

  3. Deploy them on Testnet.

  4. Finish implementing the DApp as L1 Classic.

  5. Launch on Mainnet.

When CoBuild back-end and front-end libraries are production ready, in V2:

  1. Update the scripts to validate CoBuild (OTX) messages and re-deploy them, if they are not already fully CoBuild OTX compatible.

  2. Update the DApp to support CoBuild, and possibly CoBuild OTX-style limit orders if a CoBuild OTX Collector-like service is available by then.


This is an advanced POC; I was wondering how much time it could take to have a production-ready CoBuild Rust library?

This is in fact the CoBuild Rust library that is expected to be used in Rust contracts. The repo name is slightly misleading. See ckb-transaction-cobuild-poc/contracts at main · cryptape/ckb-transaction-cobuild-poc · GitHub for actual contracts using the library.

Then again, I realized that a key Service is missing for integrating CoBuild OTX into iCKB

Yes, I do agree that OTX still requires an off-chain part so as to function properly. Unfortunately this is still in the planning phase; I will see how I can help here.

avoid validating the received CoBuild OTX as it’s a hard problem.

Clarification: I never said that we should not validate a received OTX; I just said it is a hard problem to validate any generic OTX. It is more feasible for an OTX collector/processor/agent (depending on what wording you use here, they all refer to the off-chain module for collecting and packing OTXs into L1 TXs) to deal with a certain type of OTX, fulfilling a certain type of app on CKB. In the process of working with OTXs, a collector can of course perform certain validation work.

And the validation rules, at least in the beginning, do not have to be very complicated. For similar work, Swap Transfer Rules | Atomicals Guidebook can serve as an example here. We can have an OTX collector only collect OTXs that have a certain structure (e.g., the first input cell should have no type script, the second output cell should use a particular type script, etc.). That could already provide enough logic to piece together an OTX collector for, say, iCKB. Gradually, as more requirements come, we can add more power to the collector, but my point is, you don’t really need a beefy, full-featured OTX collector to get started.

And the OTX collector can also start as a single node without any P2P functionality. It could also say: I will keep at most 20000 OTXs; any more than that will be discarded. Such an OTX collector would already provide the necessary functionality, while being somewhat DDoS resistant. Of course a P2P network will be nice to have later, but to get started, we don’t really need something huge.
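A minimal, single-node collector along these lines could look like the following (hypothetical API, just to show the bounded-buffer-plus-structural-filter idea):

```python
from collections import deque

class OtxCollector:
    """Toy OTX collector: bounded buffer + cheap structural filter."""

    def __init__(self, capacity=20000, structural_filter=lambda otx: True):
        self.buffer = deque(maxlen=capacity)  # oldest OTXs evicted first
        self.structural_filter = structural_filter

    def submit(self, otx):
        """Accept an OTX only if it matches the structural filter."""
        if self.structural_filter(otx):
            self.buffer.append(otx)
            return True
        return False  # out-of-scope / malformed OTXs are discarded

    def collect(self):
        """Expose the buffered OTXs to aggregators."""
        return list(self.buffer)

# Example structural rule: the first input cell must have no type script.
collector = OtxCollector(
    capacity=3,  # tiny bound for demonstration; DoS resistance via eviction
    structural_filter=lambda otx: otx["inputs"][0]["type"] is None)

assert collector.submit({"inputs": [{"type": None}], "id": 0})
assert not collector.submit({"inputs": [{"type": "some-udt"}], "id": 1})
for i in range(5):
    collector.submit({"inputs": [{"type": None}], "id": 2 + i})
assert len(collector.collect()) == 3  # bounded buffer kept only the newest 3
```

The `deque(maxlen=…)` eviction policy is one arbitrary choice; a real collector might instead rank OTXs by fee incentive before dropping any.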

In an ideal world where OTXs are fully utilized on CKB, we can have more sophisticated OTX collectors/processors/agents with an advanced P2P network as well as all the other fancy features, but my point really is that we can do this in incremental steps; we don’t need to have everything ready on day one.


That’s good to hear; I didn’t really like the idea of having to update the contracts right after deployment! Would it be possible to export ckb-transaction-cobuild as a crate?

(In a way that it can be installed with something like cargo add ckb-transaction-cobuild without specifying repo and folder)

Perfect!! This is super important for CoBuild OTX adoption :+1::+1:

This makes more sense!! Also, since the Collector already operates at this level, maybe it’s doable to also filter out OTXs that are already spent, by keeping an eye on spent L1 cells.


I will leave this question to @quake and @xjd for more insights.
