Approach to Designing a User Defined Token Standard on CKB: Part 1


Introduction

I’ve split up the post I had on UDT design into logical parts/steps to make it more manageable to read, easier to facilitate discussion, and easier to pivot when needed (perhaps post 1 is in the right direction, but what I planned for post 2 is not as relevant as I thought; it’s easier to change direction this way).

This post builds on the questions and possible approaches published to Nervos Talk earlier this week, titled “Discussion on UDT Standard Evaluation Criteria”.

In this first post, I talk about the types of users that will engage with the UDT interface, the level of abstraction at which the interface should exist, the operations and queries we should support, and why our standard will require that we specify a certain architecture and set of required cells rather than simply a functional API as ERC20 does. I then briefly discuss some architectural decisions and provide a preliminary, informal, and high-level description of the cells that will be required to support UDT operations and queries.

Abstraction & Programming Interface

The CKB programming model is far different from that of most other smart contract platforms in that state generation does not occur on-chain whatsoever; contracts are merely scripts that verify whether the transaction that caused their execution is valid or invalid. There are two implications of this fact that are notable for the design of the user-defined token (UDT) standard.

Implication 1: Query Interface → standardized data location
The first implication is that contracts do not provide a request-response or event-based interface that is callable by external parties. Contracts do not return queried information on behalf of the querier, nor do they produce on-chain side effects on behalf of the transactor. This means that a smart-contract-provided query interface doesn’t make sense on CKB. The ability to query for token information is inarguably crucial, so that raises the question: if not a smart-contract-provided API, then what? Queriers have access to an RPC API provided by nodes, which can access chain data so long as the arguments provided to these RPCs inform the API server where to look. Therefore, standardizing smart-contract logic itself is necessary but not sufficient for supporting UDT queries on CKB: we also have to standardize where crucial UDT data is stored so that services can easily look up that data.
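
To make this concrete, here is a minimal sketch of what the “get balance of address for udt” query could look like once storage locations are standardized. Everything here is an assumption for illustration: the cell-lookup function is a stand-in for whatever node/indexer RPC is available, and the token amount is assumed to be a little-endian u128 at the start of each UDT cell’s data.

```rust
// Hypothetical sketch only: assumes a standardized data location for UDT
// amounts and a node/indexer lookup; none of these names are real APIs.

/// A live cell as returned by some node/indexer query (simplified).
struct LiveCell {
    lock_hash: [u8; 32], // hash of the owner's lock script
    type_hash: [u8; 32], // hash of the type script identifying the UDT
    data: Vec<u8>,       // cell data; assume the amount lives in the first 16 bytes
}

/// Stand-in for a node/indexer RPC that returns live cells owned by a lock hash.
fn live_cells_by_lock(_lock_hash: &[u8; 32]) -> Vec<LiveCell> {
    Vec::new() // a real implementation would query the chain
}

/// Sum the amounts of all UDT cells owned by `lock_hash` for the token
/// identified by `udt_type_hash`. A standardized location for the amount is
/// exactly what makes this query possible without any on-chain API.
fn balance_of(lock_hash: &[u8; 32], udt_type_hash: &[u8; 32]) -> u128 {
    live_cells_by_lock(lock_hash)
        .iter()
        .filter(|c| &c.type_hash == udt_type_hash && c.data.len() >= 16)
        .map(|c| {
            let mut amount = [0u8; 16];
            amount.copy_from_slice(&c.data[..16]);
            u128::from_le_bytes(amount)
        })
        .sum()
}
```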

To standardize where UDT data is stored, we have to standardize a minimum necessary architecture composed of cells. We should keep in mind during this design process that the standard should be extensible to fit a variety of use cases. In a standard such as ERC20, the functional signatures and expected return values of smart contracts are all that is specified, leaving implementation details almost entirely at developers’ discretion. In contrast, the CKB-UDT standard requires that we specify a foundational architecture (and therefore certain implementation details) at the very least, constraining how developers can architect their custom token. Flexibility and extensibility are values that Nervos Network regards highly, so we should take great care not to sacrifice them despite the need to specify a basic architectural design.

Implication 2: Programming Interface → Transaction Rulesets
The second implication is that smart contracts on CKB, although attached to cells, are actually transaction-level rulesets. The contracts - referred to from now on as “scripts” - execute within the context of transactions. They have access to all information within the transaction as opposed to being limited to the cell to which they are attached. A transaction, of course, can have many scripts within its various cells. So, a script can be thought of as a particular subset of the entire ruleset of a transaction, where the ruleset is the union over the rules of each script. Transactions themselves, not scripts, produce state changes on the network.

When a developer wants to deploy custom programmatic behavior on Ethereum, their focus is on the behavior of the smart contracts. On CKB, it makes more sense to focus on a transaction - which describes a set of state changes or behaviors - and then develop scripts that enforce the rules of the transaction. In other words, while many smart contract platforms focus on smart contracts to implement custom programmatic behavior, CKB takes a transaction-first approach. The design of the transaction is what is important; the scripts are merely there to ensure that the rules of the transaction design are followed. So, while a custom token standard on a typical smart contract platform would define a programmatic interface for smart contracts where different contract functions are associated with token operations, a custom token standard on CKB would define a transaction-first interface where each transaction structure is associated with a specific UDT operation.
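
As an illustration of this transaction-first mindset, here is a rough sketch of a transfer expressed as a transaction shape rather than a function call. The types are simplified stand-ins, not CKB’s actual transaction encoding, and the balancing rule shown is only the most obvious rule a transfer ruleset would contain.

```rust
// Sketch: a UDT "transfer" described as a transaction structure. All types and
// field names are illustrative assumptions, not CKB's real encoding.

struct CellRef([u8; 36]); // outpoint: tx hash + output index (illustrative)

struct UdtInstanceCell {
    capacity: u64,                  // CKB capacity occupied by the cell
    lock_hash: [u8; 32],            // owner's lock
    definition_type_hash: [u8; 32], // binds the cell to its UDT-Definition
    amount: u128,                   // token amount held in this cell
}

/// The "interface" for a transfer is the transaction shape itself: instance
/// cells in, instance cells out, with the UDT-Definition referenced as a dep
/// so its script can verify the rules.
struct TransferTransaction {
    deps: Vec<CellRef>,           // must include the UDT-Definition cell
    inputs: Vec<UdtInstanceCell>, // sender's instance cells being consumed
    outputs: Vec<UdtInstanceCell>, // recipient's (and change) instance cells
}

impl TransferTransaction {
    /// The core rule a transfer ruleset would enforce: total amount in equals
    /// total amount out (no minting, no burning).
    fn amounts_balance(&self) -> bool {
        let total_in: u128 = self.inputs.iter().map(|c| c.amount).sum();
        let total_out: u128 = self.outputs.iter().map(|c| c.amount).sum();
        total_in == total_out
    }
}
```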

Requirements for UDT Standard

The first step in designing the CKB-UDT standard is to understand who will be using it & why. This will inform the operations & queries we wish to support. The operations will be mapped to transaction rulesets while the queries will be mapped to standardized data locations, sizes, and formats. Since transactions are bound by rulesets which enforce both structural and content-based rules on transactions, specifying transaction rulesets entails specifying a minimum necessary architecture as well. As I mentioned in the above section, specifying a standardized architecture is unavoidable due to CKB’s programming model.

Users

The users will primarily be DApp developers, wallets, exchanges & other services that perform state generation or querying on behalf of end users. I didn’t mention holders here, nor did I mention the users of DApps, because both will be using the UDT through some other interface. So, really, only the developers of tokens or DApps, and off-chain software services, will be interacting with CKB-UDTs directly.

Users fall into one or more of three user types: developers, generators, and queriers. Developers in this context are those who work on some custom token directly; generators are any external service that submits UDT-specific transactions; and queriers are any external service that queries the chain for UDT-specific information.

Before going into the actual queries & state changes the standard should support, it’s important to understand the overall system constraints that will affect any particular specification Nervos Network comes up with.

Cost Constraints

  • Computation cost of script execution for UDT-specific transactions
  • Capacity cost of storing UDT-specific state on chain
  • Transaction cost, in terms of how many transactions are required to perform a single operation, as well as the size of each transaction, which affects fees

Usability Constraints

  • Extensibility for developers to build on top of and customize their token while remaining standard-compliant
  • Query complexity for queriers to gather and parse relevant information, including gathering the information that needs to be included in a transaction
  • UDT adoption, in terms of how much work a new service needs to do in order to add support for a new UDT

The operations and queries we support in the standard should ultimately be sufficient for a variety of use cases, including customized monetary policy, customized issuance schemes for offerings, and interoperability with other DApps.

Note: due to the programming model of CKB, the scripts enforcing rules on UDT-specific transactions are decoupled from the actual tokens held by users. The actual tokens held by users within cells are what I will refer to as “UDT-Instances”, whereas the collection of metadata and scripts responsible for storing system-level information & enforcing transaction rulesets is what I will refer to collectively as a “UDT-Definition”. More on this later.

Queries & Operations

Queries

  • get balance of address for udt
  • get udt metadata
  • get dependencies for udt transaction
  • get approved spenders of udt instance
  • get approved updaters of udt definition
  • get approved minters of udt instance from definition
  • gather udt instances for address

Operations (state changes)

  • transfer udt instance(s)
  • approve spender for udt instance(s)
  • burn udt instance(s)
  • mint new udt instance(s)
  • update udt definition
  • create new udt definition with initial supply of udt instances
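
As a rough sketch of how the operations listed above map onto transaction rulesets rather than function calls, the enumeration below annotates each operation with the cells its transaction would touch under the architecture sketched later in this post. The variant names and cell mappings are my own illustration, not part of any finalized standard.

```rust
// Illustrative only: the proposed operations as an enum, each corresponding to
// a transaction ruleset rather than a contract function.

enum UdtOperation {
    /// Inputs and outputs are UDT-Instance cells; the UDT-Definition is a dep.
    Transfer,
    /// Updates permission data attached to one or more UDT-Instance cells.
    ApproveSpender,
    /// Consumes UDT-Instance cells without recreating their amount in outputs.
    Burn,
    /// Creates new UDT-Instance cells; must be authorized by the UDT-Definition.
    Mint,
    /// Rewrites the UDT-Definition cell; validated by the UDT-Standard.
    UpdateDefinition,
    /// Creates a new UDT-Definition plus its initial supply of UDT-Instances.
    CreateDefinition,
}
```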

Architecture Specification

One of the challenges with designing this specification is that each decision constrains every subsequent decision. A basic architectural specification is crucial for specifying transaction structure and rules, as well as for specifying how queries should be formed for data lookup. We need the architectural specification in order to put forth transaction generation & query rules (since transactions & queries don’t mean much w/o specifying the cells that are involved), yet we need transaction generation & query rules to understand which architectural decisions make the most sense to begin with… This seems circular.

So what I will do is decide on some preliminary architecture, use that preliminary architecture to design transaction rulesets & query operations, and then note which portions of the architecture need to be changed to better support the actual specification.

The first architectural decision - or perhaps architectural feature of CKB’s programming model - is the separation of state change logic and the actual state bound by this logic. For any component of state (i.e., cell or set of cells) that may be subject to change in the future, there must be another component of state that contains the verification logic. Note here that “component of state” is a logical component since it refers to one or more cells.

So, if we have UDT-Instances with which people will transact, it follows that we must have some second cell that contains the verification logic for these UDT-Instances. I will call this the “UDT-Definition”. Since we want to enable updating of the verification logic under exceptional circumstances (such as security vulnerabilities or policy changes that the UDT user base approves of), it follows that we must have some third cell that contains the verification logic for updating these UDT-Definitions, which I will call the “UDT-Standard”.

Aside about metadata: Above I mentioned that the UDT-definition is not just the verification logic, but all of the metadata that doesn’t belong on UDT-instances as well. The question is: where to store this metadata? Should the metadata get its own cell? Should it be stored as args on the UDT-definition’s script(s): the “UDT-Standard”? Even if the metadata for some UDT and the verification logic are split up into separate cells, these cells are still collectively called the “UDT-Definition”. Therefore, the UDT taxonomy I’m using does not necessarily map 1:1 to actual cells; it is a logical taxonomy rather than a physical taxonomy.

So, the architecture is a hierarchy with at least three levels:

  1. UDT-Instances
  2. UDT-Definitions
  3. UDT-Standard

The standard constrains valid transactions that involve UDT-Definitions and the UDT-Definitions constrain valid transactions that involve their respective UDT-Instances.
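
The following sketch expresses this three-level taxonomy as logical components. The field names are assumptions for illustration only; as noted above, a logical component may map to more than one physical cell.

```rust
// Illustrative taxonomy only; field names are assumptions, and the mapping to
// physical cells is deliberately left open.

/// Level 3: constrains transactions that create or update UDT-Definitions.
struct UdtStandard {
    update_rules_code_hash: [u8; 32], // verification logic for definition updates
}

/// Level 2: one per token; constrains transactions on its UDT-Instances.
struct UdtDefinition {
    standard_type_hash: [u8; 32], // which UDT-Standard governs this definition
    instance_rules_code_hash: [u8; 32],
    metadata: Vec<u8>,            // name, symbol, total supply, ... (location TBD)
}

/// Level 1: one per holding; the tokens users actually transact with.
struct UdtInstance {
    definition_type_hash: [u8; 32], // which UDT-Definition governs this instance
    owner_lock_hash: [u8; 32],
    amount: u128,
}
```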

Nervos-provided vs Developer-provided UDT-Standard Cell
At the UDT-Standard level, there is 1 UDT-standard to many definitions. At the UDT-Definition level, there is 1 UDT-Definition to many instances. The question is: if developers are responsible for implementing the UDT-Definition, are they also responsible for building the UDT-Standard, or should that be provided by the Nervos Network? Since the goal is to allow for flexible monetary, governance, and issuance policies, there are two ways forward. The first way is to provide a UDT-Standard cell for the developer community and make it configurable enough through args. The second way is to provide a minimum ruleset that the UDT-Standard must enforce and then developers can build the UDT-Standard cell in any way they wish so long as it satisfies the necessary rules. If we end up wishing to store metadata as args passed to the UDT-standard cell, then the order & meaning of the args will need to be standardized so that the metadata can be retrieved by other services. This is easier if the standard cell is provided by Nervos Network, because the args can be enforced programmatically. However, by enabling developers to build their own UDT-Standard cell, they can omit optional functionality rather than configure it to be disabled via certain args.

Imagine the following scenario: the UDT-Standard cell offered by Nervos Network allows minting of new tokens after the initial supply to be enabled or disabled w/ a boolean arg, as well as additional parameters on minting to be set (such as how much at once, whether there is stakeholder voting, approving certain minters as signatories, etc). A developer who doesn’t need minting disables it with that flag. They do, however, want upgradeability, so they set upgradeability to true with another arg & include additional configuration. Now, whenever they upgrade the UDT-Definition, the UDT-Standard cell includes redundant code, thereby increasing the size of the transaction unnecessarily. If they were to build their own UDT-Standard cell, however, they could omit any functionality irrelevant to their use case. One possible solution is to standardize the args regardless of what code is actually included in their UDT-Standard script. So even if their custom standard script does not regard the third arg as a “mintable flag”, they still need to include a “false” value there to indicate that the token is not mintable, and their script will have to be written to ignore that flag. This, however, is a “hacky” solution.
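
For concreteness, here is a minimal sketch of that “standardize the args” workaround, assuming a flat byte layout with flags at fixed positions. The positions, field names, and layout are hypothetical; the point is only that a script which omits minting code would still have to accept (and ignore) the minting-related args.

```rust
// Hypothetical standardized arg layout: [upgradable flag, mintable flag,
// 16-byte LE mint cap]. Positions and fields are assumptions for illustration.

struct StandardArgs {
    upgradable: bool,      // arg 0: may the UDT-Definition be updated?
    mintable: bool,        // arg 1: may new instances be minted after creation?
    mint_cap_per_tx: u128, // arg 2: ignored by scripts that omit minting code
}

/// Parse args from the flat byte layout. A script with no minting logic must
/// still accept args 1-2 and simply ignore them.
fn parse_standard_args(raw: &[u8]) -> Option<StandardArgs> {
    if raw.len() < 18 {
        return None;
    }
    let mut cap = [0u8; 16];
    cap.copy_from_slice(&raw[2..18]);
    Some(StandardArgs {
        upgradable: raw[0] != 0,
        mintable: raw[1] != 0,
        mint_cap_per_tx: u128::from_le_bytes(cap),
    })
}
```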

Metadata as args on UDT Definition vs metadata cell
Imagine the alternative, though, where metadata - such as total supply - is stored in a metadata cell instead of in script args. Some of this metadata is required by scripts. For example, a script will need access to total supply in the event of, say, increasing or decreasing the supply (minting or burning). It is easier, then, for total supply to be included as an arg to the script than to be stored in a completely different cell.

There are two solutions to this. The first is to create a metadata cell and provide the script - via an arg - with the outpoint. Transactions must then include this outpoint as a dep in the deps field. Scripts would load the metadata in and verify updates or minting that way. The problem with this is that loading in data is more computationally expensive. Luckily, CKB programming supports partial data loading. The second solution is to split up the metadata: some stored in args, some stored in a metadata cell. The args would store the metadata that is crucial to state-change logic, while other metadata would be stored in the metadata cell. This complicates things, though, because now metadata is stored in two places, and some metadata that is relevant to one script’s verification logic may be relevant to another script’s verification logic as well. For example, totalSupply is important for minting & burning, but burning can be performed by a holder as well. Storing metadata in a cell also provides greater extensibility, since additional metadata can be stored by developers in a more sophisticated data structure. So, a reference to the cell containing script-relevant data (the metadata cell) can be passed as an arg instead.
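
Here is a rough sketch of the first solution, where the script’s args identify the metadata cell, the transaction lists that cell in its deps, and the script partially loads only the bytes it needs. The loader functions are hypothetical stand-ins for CKB’s data-loading syscalls, and the assumption that total supply occupies the first 16 bytes of the metadata cell is mine.

```rust
// Hypothetical sketch: resolving a metadata cell passed by reference in the
// script args and partially loading one field from it. The loaders below are
// stand-ins, not real CKB syscalls.

/// Stand-in: type hashes of the cells listed in the transaction's deps field.
fn dep_type_hashes() -> Vec<[u8; 32]> {
    Vec::new() // supplied by the VM environment in a real script
}

/// Stand-in: partially load a dep cell's data by dep index, offset, and length.
fn load_dep_data(_dep_index: usize, _offset: usize, _len: usize) -> Vec<u8> {
    Vec::new() // supplied by the VM environment in a real script
}

/// Find the metadata cell among the deps using the reference passed in args,
/// then load only the bytes the script needs (here, an assumed total supply).
fn load_total_supply(metadata_type_hash_arg: &[u8; 32]) -> Option<u128> {
    let dep_index = dep_type_hashes()
        .iter()
        .position(|h| h == metadata_type_hash_arg)?;
    // Assumption: total supply occupies bytes 0..16 of the metadata cell data.
    let bytes = load_dep_data(dep_index, 0, 16);
    let mut buf = [0u8; 16];
    buf.copy_from_slice(bytes.get(0..16)?);
    Some(u128::from_le_bytes(buf))
}
```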

What we need to ensure when designing the structure of the metadata cell is that the script-relevant fields are serialized so that scripts can partially load data rather than loading the entire contents of the metadata cell, since some pieces of metadata will be relevant for one specific script while others will not be. For example, if the data were stored as a map from keys to values, the script would have to load in the entire map in order to read only 1 or 2 of the relevant keys. A list structure would allow partial loading, but we would have to determine how to handle optional data. In a map, a key can simply be omitted. In a list structure, an omitted key would change the byte address of every subsequent metadatum, and therefore the entire list would still have to be loaded. One solution is to standardize the byte addresses and their meaning (e.g., the first x bytes are dedicated to the name), with omission of an optional metadatum represented by a “null byte sequence”. The tradeoff here is that if a developer wishes to omit a certain piece of data from the metadata cell, they still need to occupy capacity equal to the capacity required to store every piece of metadata, regardless of whether some piece is considered optional.
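
A minimal sketch of this fixed-offset layout follows. The specific fields, offsets, and sizes are assumptions for illustration; the relevant properties are that every field has a standardized byte range, an omitted optional field is an all-zero “null byte sequence”, and a reader can load just the slice it needs.

```rust
// Hypothetical fixed-offset metadata layout; offsets and field choices are
// assumptions, not a proposed standard.

const NAME: (usize, usize) = (0, 32);          // bytes 0..32: token name (padded)
const SYMBOL: (usize, usize) = (32, 40);       // bytes 32..40: symbol (padded)
const TOTAL_SUPPLY: (usize, usize) = (40, 56); // bytes 40..56: u128 LE
const MAX_SUPPLY: (usize, usize) = (56, 72);   // bytes 56..72: optional u128 LE

// The capacity tradeoff: the cell always occupies the full layout, even when
// optional fields are left as null byte sequences.
const METADATA_LEN: usize = 72;

/// Read a single field by its standardized range; an all-zero slice means the
/// optional field was omitted. Partial loading works because the offsets are
/// fixed regardless of which optional fields are present.
fn read_field(data: &[u8], range: (usize, usize)) -> Option<&[u8]> {
    let slice = data.get(range.0..range.1)?;
    if slice.iter().all(|b| *b == 0) {
        None // "null byte sequence": optional field not set
    } else {
        Some(slice)
    }
}

fn read_total_supply(data: &[u8]) -> Option<u128> {
    let slice = read_field(data, TOTAL_SUPPLY)?;
    let mut buf = [0u8; 16];
    buf.copy_from_slice(slice);
    Some(u128::from_le_bytes(buf))
}
```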

Hashmaps take up more space to begin with; a hashmap with two key-value pairs will take up far more space than an array with two elements. So, in the worst-case scenario, when all optional metadata is included, an array-based structure takes up less space and supports partial loading whereas a map does not. We will have to decide which data structure to use, whether it be a list, a map, or something else. The right serialization method can fix some of these problems, with the tradeoff that there will be added deserialization overhead for both scripts and queriers.

UDT Standard as a Lock & Type Script
The UDT-Standard cell is there to provide verification logic for operations performed on the UDT-Definition. This includes permissions management as well as ensuring valid state changes and data consistency. Valid state changes are meant to be handled by type scripts according to CKB’s programming model, while permissions are handled by lock scripts. Therefore, the UDT-Standard is physically composed of two cells: a lock script cell and a type script cell.

This unfortunately complicates things for developers, since the more cells they have to store, the more capacity is required to develop a UDT in the first place. However, allowing them to develop their own UDT-Standard cells lets them remove redundant code, thereby saving capacity. So this is a tradeoff. Probably the best solution is to provide Standard cells for developers who wish to use them (since it saves development time as well), while developers remain free to develop their own standard-compliant UDT-Standard cells supporting only the optional operations they want, so long as the required rulesets are enforced by their custom scripts.

Summary of architecture so far

  1. UDT-Instances
  2. UDT-Definitions — Type Script Cell; Metadata Cell
  3. UDT-Standard — Type Script Cell; Lock Script Cell

Conclusion to part 1

In this post I covered why transactions, not scripts, are the primary programmatic interface for CKB development. I proposed a set of operations and queries that the UDT standard should support, explained why our UDT standard will require specifying a specific architecture for all UDTs developed by the community, and then described a very high-level architecture for a UDT, which was simply the set of cells necessary to support the required operations and queries.

In the next post, I’ll dive into the transaction rulesets and then, based on those rulesets, describe the UDT architecture in greater depth. For example, if a UDT-Definition is updated, the references that UDT-Instances hold to that UDT-Definition would not otherwise remain valid and, therefore, the UDT-Standard will have to enforce something similar to the Type ID functionality. As another example, the UDT-Definition should provide forgery protection, yet creation of a UDT’s initial supply must circumvent this. How the UDT creation operation is designed (is it by providing special permissions, using a one-time authorization, etc.?) will affect which args are passed to which scripts, and so on.


Great extension and explanation of the UDT standard evaluation criteria post!

Why is there no Chinese translation of this article? It is very significant.
Tokens on Ethereum have a fundamental problem: a token is merely a symbol proving that you hold some future entitlement, or even just a fundraising symbol, waiting for the project to launch its official chain token in the future and replace and recall these ERC20 tokens. So Ethereum is like a parasitized fundraising tool; it only circulates value and cannot retain value. Another serious problem is that Ethereum can only help developers raise funds: developers cannot earn ETH from Ethereum itself through the projects they build, so they cannot profit or support themselves, and developers are leaving Ethereum.

Through this CKB article, I realized a fundamental difference between CKB and Ethereum. CKB (Nervos) may likewise be used for fundraising, but its tokens may be more than symbols of entitlement: projects can issue utility tokens directly on CKB’s layer 2 chains without leaving the CKB system. This leads to several consequences:
1. Developers do not need to leave the CKB system to build another system.
2. Developers can all stay within the CKB system.
3. Developers can build their own systems on CKB layer 2 and, by providing services and offering their own tokens, exchange them or earn CKB tokens within the system. This solves the problem of developers making a profit and supporting themselves, and it will lead to fundamental change.
4. Solving the developer profitability problem could bring a large wave of developers into the DApp blockchain ecosystem.

One last sentence: the Nervos system can surpass Ethereum, and the value of CKB can exceed that of ETH. My claim here is that whoever solves the developer profitability problem will change the world. Written on 2019-11-23.
币圈飘飘 (2019-11-23)


We will translate this article into Chinese in the coming days, and we will organize an online talk about the (pre) UDT on CKB next week.

If you’re interested in it, you’re welcome to join us. My WeChat account: stwith.

(Please note this is the English channel.)


I agree with 币圈飘飘’s view.
王业伟 (2019-12-11)