I’ll admit I don’t know exactly how nodes work, so I was hoping someone could answer a few questions.
The reason I’m asking is that, as the topic says, I was wondering whether Secondary Issuance might allow CKB to do something I don’t think any other blockchain can do: reward people for running nodes.
It wouldn’t necessarily need to be a real PoW system; some random lottery-type system would probably do. For example, every certain number of blocks, an eligible node (100% uptime over the period) would receive a CKB reward.
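A minimal sketch of such a lottery, assuming eligibility is tracked somewhere off-chain and a block hash serves as the shared randomness (all names and values here are hypothetical):

```python
import hashlib

def pick_winner(block_hash_hex: str, eligible_nodes: list[str]) -> str:
    """Deterministically pick one eligible node, using a block hash as a
    shared source of randomness. The node list must be identically ordered
    for every observer so that everyone computes the same winner."""
    digest = hashlib.sha256(bytes.fromhex(block_hash_hex)).digest()
    index = int.from_bytes(digest, "big") % len(eligible_nodes)
    return eligible_nodes[index]

# Nodes with 100% uptime over the reward period (illustrative values).
eligible = sorted(["node-a", "node-b", "node-c"])
winner = pick_winner("ab" * 32, eligible)
print(winner)
```

Because the block hash is public, anyone can verify the draw; the hard part is deciding who counts as eligible in the first place.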
Some questions I have are:
1 – Is it possible to associate a CKB address with a node, similar to a miner?
2 – Is it detrimental to the network to have ‘too many’ nodes, especially when this would encourage individual people to run multiple nodes, or does this not matter and the more nodes the better?
3 – Is there a technical reason that this is just impossible to implement?
It’s an interesting question, not silly at all. For a PoW network, the more full nodes the better. I think the main issue would be sybil attacks: how do we prevent someone from spinning up 10k nodes just to make gains? We can’t rely on PoW/PoS here again, otherwise full nodes would just become a new kind of “miner”.
A reasonable solution to me is to combine full node rewards with a DAO: use DAO governance to curate a reputable full node list and distribute appropriate rewards from secondary issuance.
For example, the DAO (either CKB Community DAO 1.0, 2.0, … or the future NervosDAO) can first fund a data-collecting service, CKB Node Ninjas (CNN). CNN consists of two open-source sub-projects: a client-side daemon running alongside a CKB node to collect node stats, and a website collecting node stats from all voluntary CNN clients (e.g. like ethstats). At the same time, the DAO also creates a CNN Rewards program, which will periodically pick winners from a subset of CNN nodes, according to some pre-agreed rules and criteria.
Actually, the DAO-as-an-intermediary scheme can be extended to other things that are good for the network but difficult to bake into consensus. It can connect incentives with network growth in a decentralized and flexible way - it’s much easier to adjust CNN Rewards rules than something hard-coded in consensus.
Hi Jan, thanks for the reply and especially for putting forward your idea. That sounds like something we could actually aim for and I think is the sort of thing that would be widely supported by the community DAO.
I would say a p2p storage market + a DAO governed treasury.
Decentralized storage is already technically viable, as demonstrated by Filecoin, Sia, and other erasure-coding-based schemes. The key question is: what should the economic model of the storage market look like? How should storage borrowers pay providers? Is a “storage coin” necessary for transactions? I believe stablecoins and (micro)payment channels like Fiber are better fits than mandating specific “storage coins” and on-chain payments.
Such a p2p storage market could use peer-to-peer messaging for demand/supply matching, erasure coding for data redundancy, and stablecoin in Fiber channels for payments. Storage users can broadcast demand quotes to find matches, set up Fiber payments once deals are made, upload data to selected providers, and make stablecoin micropayments through Fiber channels when retrieving data. Providers can broadcast supply quotes, accept uploads after setting up Fiber payments, pull funds once uploads are complete, etc.
If such a p2p storage market exists, it can serve as the decentralized data availability layer for Nervos, allowing apps to become more decentralized by using it. It complements CKB by storing “dumb data” rather than programmable “smart data”(stored in CKB cells). It could also be used to store “public goods” dumb data such as historical blocks and transactions (including witnesses). Finally, the costs of maintaining this “public goods” data can be funded by the CKB treasury, ensuring its long-term availability.
For now, I think CKBFS is awesome and usable. As the ecosystem grows, more options will likely emerge, and some people might still prefer witnesses (perhaps because they are DA redundant by then :-)).
Another idea I’m very fond of is combining DHT and Fiber channels to solve the “vampire node” problem in p2p file sharing. Typically, there are more downloading than uploading nodes in DHT; some nodes only download and never upload, which harms network health. Private trackers partially solved this by collecting node statistics through a centralized service, but this can be easily cheated since it’s impossible to verify the actual data transmitted between anonymous nodes.
What if we integrate Fiber micropayments into each data packet transmission in DHT networks? Every download incurs a fee, and every upload earns money, byte by byte. Anonymous nodes can remain anonymous, no KYC no bank account no signup required. “Vampire nodes” are welcome because they must pay real money for “blood” in this fiberized-DHT. Data transmission can be automatically balanced across the network through economic incentives.
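A toy sketch of pay-per-chunk serving under these assumptions; `Channel` is a stand-in for a real Fiber channel, and the prices are made up:

```python
class Channel:
    """Hypothetical micropayment channel between downloader and uploader."""
    def __init__(self, balance: float):
        self.balance = balance        # downloader's remaining funds
        self.earned = 0.0             # uploader's accumulated revenue

    def pay(self, amount: float) -> bool:
        if self.balance < amount:
            return False              # downloader out of funds
        self.balance -= amount
        self.earned += amount
        return True

def serve_file(chunks: list[bytes], channel: Channel, price_per_byte: float):
    """Yield chunks only while each one is paid for, byte by byte."""
    for chunk in chunks:
        if not channel.pay(len(chunk) * price_per_byte):
            break                     # a "vampire" is simply an unfunded peer
        yield chunk

# Funds cover two of the three 1 KiB chunks at this (made-up) price.
channel = Channel(balance=0.010)
chunks = [b"x" * 1024] * 3
received = list(serve_file(chunks, channel, price_per_byte=0.000004))
print(len(received), round(channel.earned, 6))
```

The interesting property is that no trust or accounting service is needed: a peer that stops paying simply stops receiving, and upload capacity flows to whoever funds it.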
Just a question on this point, for Jan or anyone else who wants to answer it.
I understand this is bad for the reward system as it dilutes the rewards and the chance of honest Node runners receiving rewards, but is this actually bad for the network?
Does it matter if there are 100 people running 1000 nodes each VS 100,000 people each running 1 node?
It seems obvious that we would ‘want’ the second option, but both result in 100,000 nodes and I’m just not sure how this affects the network and what the consequences are.
The strength of a decentralized network lies in the diversity of its nodes. A 10,000 x 1 network is more diverse than a 100 x 100 one. Diversity enhances resilience.
In a 100x100 network, a farm of 100 nodes might reside on a single server or within the same cloud region, so a single hardware, network, or operational failure could disable all 100 nodes. The entire network would function more like one with ~100 independent nodes rather than 10,000 independent nodes.
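The correlated-failure point can be made concrete with a toy binomial model (illustrative numbers only): assume each independent operator has a 1% chance of an outage in some window, and ask how likely it is that at least 5% of a 10,000-node network goes down at once:

```python
from math import exp, lgamma, log

def log_binom_pmf(n: int, k: int, p: float) -> float:
    """Log of the binomial probability P(exactly k of n operators fail)."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

def tail_prob(n: int, p: float, k: int) -> float:
    """P(at least k of n independent operators fail)."""
    return sum(exp(log_binom_pmf(n, i, p)) for i in range(k, n + 1))

# 100 operators x 100 nodes: losing >=5% means >=5 operator failures.
print(f"100x100:  {tail_prob(100, 0.01, 5):.4%}")
# 10,000 operators x 1 node: losing >=5% means >=500 operator failures.
print(f"10000x1: {tail_prob(10_000, 0.01, 500):.1e}")
```

In the 100x100 layout the answer comes out to a few tenths of a percent; in the 10,000x1 layout it is astronomically small, because hundreds of independent operators would have to fail in the same window.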
A more significant problem is that if an adversary controls a certain number of nodes in a P2P network, they can attack the network in various ways, such as this attack recently discovered by Ren or eclipse attacks. Having as many independent nodes as possible allows honest ones to collectively prevent adversaries from causing harm.
Interesting idea. As long as it doesn’t negatively affect honest users/nodes, it could be very positive.
It would offer both a revenue for node operators and a use-case for Fiber.
Should this also be applied to RPC requests?
If you remember, I also found an impractical attack where a user queries an untrustworthy RPC provider. It has since been mitigated by a new standard, and it will eventually be fully solved once that standard is adopted.
Idea: say we integrate Fiber micro-payments, we could slash the capital of nodes that return evidently forged data.
Yes, we need a diverse set of nodes, possibly nodes that we can safely query over RPC?
My main issue is that there are exactly 2 publicly available full nodes that I can RPC query against in my web DApps.
Another issue is that there is no single configuration for public nodes. All nodes can be configured in an ever so slightly different way, so their capabilities will be different and not interchangeable. This means that creating a pool of public nodes is difficult. This old standardization Feature Request from @jm9k could help.
How do you see this being done? Would each node runner have to do KYC to be approved, or are there other ways we could know if someone is running multiple nodes? I’m sure you could do this with IP addresses, but I’m assuming that would be easy to get around.
KYC is at the other extreme; I think there are many middle points on the spectrum. For example, nodes can be asked to:
bind with an IP address, which could be a good start as IPs are limited and not free.
bind with a social account and piggyback on existing identity services.
bind with a Nervos Talk account, and the account must have X posts in past Y months.
bind with a NervosDAO address, with CommunityDAO voting history.
bind with a .bit or did:web5 handle with activities in web5 apps.
bind with a trustable web-of-trust ID; of course, a WoT must first be built on CKB.
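One way a DAO-run scheme could combine several such bindings is a simple additive sybil-cost score; the signal names, weights, and threshold below are entirely made up for illustration:

```python
# Hypothetical weights: harder-to-fake bindings count for more.
WEIGHTS = {
    "ip_address": 1,           # limited and not free, but easy to rotate
    "social_account": 2,
    "forum_history": 3,        # e.g. X posts over Y months on Nervos Talk
    "dao_voting_history": 5,
    "web_of_trust": 8,
}

def sybil_score(bindings: set[str]) -> int:
    """Sum the weights of whichever identity bindings a node presents."""
    return sum(WEIGHTS.get(b, 0) for b in bindings)

def eligible(bindings: set[str], threshold: int = 5) -> bool:
    """A node qualifies for rewards once its combined score clears the bar."""
    return sybil_score(bindings) >= threshold

print(eligible({"ip_address", "social_account"}))      # below the threshold
print(eligible({"ip_address", "dao_voting_history"}))  # clears the threshold
```

The point of a scoring scheme like this is that no single binding needs to be sybil-proof on its own; an attacker has to fake several independent signals per fake node.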
Online reputation is a scarce resource that can be used for sybil resistance and assessed in various ways. A DAO could even support multiple competing schemes simultaneously to see which one works best.
There’s no silver bullet. No matter how perfect a scheme seems initially, hackers will find a way to break it, then the scheme will be upgraded, then hackers will find another way, and so on. Action and maintenance are more important than the initial design’s perfection.
Yeah, that would be an interesting use case for Fiber.
I don’t think RPC services are the same as full nodes; they should be incentivized differently. Running a full node means using my own node, which aligns best with the network’s principles. Relying on a remote RPC without local verification contradicts “don’t trust, verify” and encourages centralized services like Infura. I’m not saying Infuras are bad (they enhance the developer experience), but these services should operate as independent businesses rather than being continuously supported by the network treasury.
For power users, the best option is to fetch from your own full node. For average users, the decision lies with dapp developers. Dapps should always consider running their own exclusive full nodes for optimal user experience (your requests won’t compete with others’), and developers are naturally power users. As the number of RPC requests grows, scalable middleware becomes necessary, presenting an opportunity to build either an open-source project or a proprietary one that could become a business in the future. Medium/small dapps might delegate this work to businesses like Infura, with service quality guaranteed by SLA, or use free but unstable public nodes.
Agreed. That said, I’m not comfortable running a public server for this purpose, and I bet a lot of other independent devs feel the same.
It boils down to another barrier to entry and we should consider it more carefully.
That’s why I’d like to rely on a pool of them and cross-check the results.
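A minimal sketch of that cross-check, assuming each node in the pool is asked for the same value (say, the tip header hash) and a simple majority decides; `fetch` here is a hypothetical stand-in for a real JSON-RPC call:

```python
from collections import Counter

def cross_check(fetchers, quorum: int):
    """Query every node, then accept the most common answer only if at
    least `quorum` nodes returned it; otherwise return None."""
    results = Counter(fetch() for fetch in fetchers)
    answer, votes = results.most_common(1)[0]
    return answer if votes >= quorum else None

# Three public nodes; one returns a divergent (possibly forged) value.
nodes = [lambda: "0xabc", lambda: "0xabc", lambda: "0xdef"]
print(cross_check(nodes, quorum=2))
```

With a larger pool the quorum can be raised, and a node that repeatedly lands outside the majority becomes a candidate for the slashing idea mentioned earlier in the thread.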
Additionally, I’m toying with the idea of injecting these results into the Light Client DB before bootstrap. In turn, the Light Client will verify this data when bootstrapping and update it as new blocks come in. If this happens to work, we just need to send 10 MB of Light Client WASM payload to the web DApp.