The proposal team sincerely thanks the people who have participated in this one-month testing period. Five people have provided code-related feedback: @phroi, @woodbury.bit, and three other community members who formed a testing group in Discord.
The community testing group is still conducting UI/UX testing and continues to produce valuable reports; the dev team is addressing the reported issues as quickly as it can.
Bug Fixing Table

| # | Description | Status | Notes |
| --- | --- | --- | --- |
| 1 | Voter list-related issues | Design retained with documentation | The current decision is still to keep the existing design. With an audit tool developed by the community, the results would be easily verifiable. See the system design doc: Vote System |
| 2 | Unverified indexer trust | Solution proposed | An alternative solution is proposed in the last section of this post, “Regarding Centralization”. |
| 3 | Sybil weight amplification | Fixed | |
| 4 | Unauthenticated `update_state` | Fixed | |
| 5 | DID-level proposal whitelist | Fixed | |
| 6 | Stale whitelist via timing | Fixed | |
| 7 | Unauthenticated build-whitelist | Fixed | |
| 8 | Selective proof denial | Fixed | |
| 9 | Premature vote termination | Fixed | |
| 10 | Voter enumeration via proof endpoint | Fixed | |
| 11 | Browser-only key storage | Design retained | Explained in detail below this table. |
| 12 | 8-minute voting window | Fixed | |
| 13 | Whitelist ID format mismatch | Fixed | |
| 14 | Hardcoded testnet | Fixed | |
| 15 | DAO deposit detection failures | Fixed | |
| 16 | Wrong hashes in frontend config | Fixed | |
| 17 | Proposal writes and reads using different data routes, causing inconsistent data presentation | Fixed | |
| 18 | Failed indexer call | Fixed | |
| 19 | Memory leak via `Box::leak()` in error handling | Fixing | |
| 20 | Multiple `unwrap()` panics on user-influenced data in the scheduler | Fixing | |
| 21 | Dockerfile `CMD` does not expand shell variables | Fixing | |
| 22 | Database credentials logged at startup | Fixing | |
| 23 | Permissive CORS policy allows all origins | Fixing | |
| 24 | Delete operations use `fetch_one` instead of `execute` | Fixing | |
| 25 | N+1 query patterns in multiple API endpoints and schedulers | Fixing | |
| 26 | Missing database indexes on frequently queried columns | Fixing | |
| 27 | Hardcoded CKB contract code hashes | Fixing | |
| 28 | No rate limiting or request body size limits on API endpoints | Fixing | |
| 29 | Missing pagination limit caps allow full table dumps | Fixing | |
| 30 | HTTP client created per request instead of reused | Fixing | |
Bugs 18 through 30 were reported by community testers; they are logged and tracked through GitHub issues.
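As an illustration of one of the bug classes above (missing pagination limit caps, #29), the usual fix is to clamp client-supplied pagination values server-side. This is only a sketch of the general technique; the names and the cap value are hypothetical, not DAO 1.1’s actual API:

```python
MAX_PAGE_SIZE = 100  # hypothetical server-side cap


def clamp_pagination(limit, offset, max_limit=MAX_PAGE_SIZE):
    """Clamp client-supplied pagination so a single request can never
    dump a whole table, regardless of what the client asks for."""
    try:
        limit = int(limit)
        offset = int(offset)
    except (TypeError, ValueError):
        return max_limit, 0  # malformed input: fall back to safe defaults
    limit = max(1, min(limit, max_limit))  # 1 <= limit <= max_limit
    offset = max(0, offset)                # no negative offsets
    return limit, offset


print(clamp_pagination(10_000, 0))   # -> (100, 0): capped, no full dump
print(clamp_pagination("25", "50"))  # -> (25, 50): reasonable request passes
```

The same clamp-at-the-boundary idea applies to request body size limits (#28): never trust a client-supplied size.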
Related Posts
Below are links to various posts in the ongoing DAO v1.1 debate, another attempt to gather increasingly fragmented communication in one place so the community can follow what has happened:
The debate is still happening around the whitelist design, and there’s no set date yet for the official delivery of M2. The obvious next step is to fix the bugs revealed by the community first; as for how the proposal itself can advance, the proposal team is willing to work with anyone who sincerely wants to push it forward.
Attention
The proposal team has also noticed some unwanted behavior towards the steward team. They encourage everyone in the community to keep the conversation respectful, focus on solving problems, and make valuable things happen. They also want to express their sincere thanks to the steward team: @_magicsheep, @nightlantern, and @zz_tovarishch for their work during this bumpy time.
Apologies for the strange formatting issues caused by my original device; I have switched to a different device and re-edited the post.
I do not deny that there are many centralized services in the DAO 1.1 system, but it has subtle yet essential differences from DAO 1.0 (Metaforo).
Taking the duplicate voting issue encountered during the DAO 1.1 proposal voting process as an example:
In the Metaforo crisis, if someone had not discovered the duplicate voting problem based on other clues, and the foundation had not retrieved the voting data through the Metaforo backend, this issue could easily have been quietly covered up.
However, in DAO 1.1, any community member can independently detect that something went wrong with such a vote. No additional clues are needed and no backend administrative privileges are required; it relies only on on-chain data.
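To illustrate what “only on-chain data” makes possible, here is a minimal sketch of duplicate-vote detection over a list of vote records pulled from chain. The record fields (`voter_id`, `tx_hash`) are hypothetical, not DAO 1.1’s actual schema:

```python
from collections import Counter


def find_duplicate_voters(votes):
    """Given vote records reconstructed from on-chain transactions,
    return the sorted IDs of any voter appearing more than once."""
    counts = Counter(v["voter_id"] for v in votes)
    return sorted(vid for vid, n in counts.items() if n > 1)


# Example: "alice" voted twice in two different transactions.
votes = [
    {"voter_id": "alice", "tx_hash": "0xaa"},
    {"voter_id": "bob",   "tx_hash": "0xbb"},
    {"voter_id": "alice", "tx_hash": "0xcc"},
]
print(find_duplicate_voters(votes))  # -> ['alice']
```

Anyone who can read the chain can run a check like this; no backend access is involved.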
For easier understanding, you may refer to the concept of Optimistic Rollup. The centralized servers in DAO 1.1 function similarly to a kind of “fast path,” and users can fall back to the “slow path” at any time.
For the specific scenario you mentioned, users can construct the voter list from on-chain data and generate the SMT proof they need themselves, then directly submit the voting transaction—completely without relying on DAO 1.1’s centralized server.
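The self-service slow path described above can be sketched roughly as follows. This is only an illustrative toy: DAO 1.1 actually uses a Sparse Merkle Tree (SMT) over CKB on-chain data, while this sketch uses a plain binary Merkle tree with SHA-256 and hypothetical names, just to show the shape of “rebuild the voter list yourself, then prove your own membership against the published root”:

```python
import hashlib


def h(data: bytes) -> bytes:
    """SHA-256, standing in for whatever hash the real contracts use."""
    return hashlib.sha256(data).digest()


def build_tree(leaves):
    """Hash the leaves and build the tree bottom-up.
    Returns all levels: level 0 = hashed leaves, last level = [root]."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-length levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels


def make_proof(levels, index):
    """Collect the sibling hashes on the path from leaf `index` to the root."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        proof.append((level[sibling], sibling % 2 == 0))  # (hash, sibling_is_left)
        index //= 2
    return proof


def verify(root, leaf, proof):
    """Recompute the root from a leaf and its sibling path."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root


# A voter rebuilds the whitelist from on-chain data, derives the root,
# and proves their own membership without asking any server:
voters = [b"alice", b"bob", b"carol", b"dave"]
levels = build_tree(voters)
root = levels[-1][0]
proof = make_proof(levels, voters.index(b"carol"))
print(verify(root, b"carol", proof))  # -> True
```

The point is not the tree flavor but the trust model: the proof can be generated client-side and checked against a root derived purely from public data, so the centralized proof endpoint is a convenience, not a dependency.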
This tool does not live in the DAO 1.1 frontend, but among the community’s independent audit tools. An audit tool must actually be independent, so we hope its development will be community-led. That coordination has not yet been completed, so development has not started. However, while testing the aforementioned independent audit approach, we developed an internal version; if needed, we can open-source it as a reference implementation.
I’m well familiar with those links: they provide a high-level overview of the whitelist lifecycle, but they don’t contain nearly enough detail to implement an independent audit tool. They also do not currently describe this well-known attack and its solution.
Forgetting for a second the documentation issue:
Does it also mean that a user can vote from any interface they prefer?
Does this imply that users will need to develop their own full stack to vote in that scenario?
For reference: the @david-fi5box team seems to be assuming that, to work around this issue (created by including the whitelist in the design), a user needs to develop a stack that has already cost more than 100k USD to build. All this also assumes the user has the required expertise.
One question, though: why is a project passing your audits deemed “good enough to publish”?
Community audits are obviously welcome, but under what circumstances can we change a proposal’s milestone delivery from “pending” to “accepted”? This is not only about DAO v1.1, but it’s also crucial for future proposals. In the future, can I also ask other proposers to pass my audits before their milestones are deemed acceptable?
If there’s a role assigned to you, explicitly or implicitly, to do this, then I guess it would be better for you to state it publicly. Otherwise, this situation actually confirms the concerns I expressed in this post: “who represents the community?” and “how do we verify a milestone?”.
BTW, it’s actually quite good to see that there are finally some semi-rules determining the acceptance of a milestone; I’m just concerned that these rules are set arbitrarily.
We really seem to be getting bogged down here lately.
The name of the project is DAO 1.1, which to me means an incremental step forward, not perfection from the get-go.
I don’t think eliminating voting exploits was even a part of the original proposal, because it was only during the vote for DAO 1.1 that we had some type of exploit (that we knew about anyway).
Obviously we all knew Metaforo wasn’t ideal, but I thought the big step forward in that regard was that the new platform would be community-owned and managed, so that any issues could be dealt with quickly and transparently; it wasn’t to perfect DAO voting systems.
So as long as the potential exploits are known and there are enough people here with the ability to understand them and stay on the lookout, I think this is good enough for the team to move forward with their main goals of increasing participation and improving the experience of both project proposers and community voters.
PS: @phroi I’m not having a crack at you here, I personally really appreciate you being here to bring these issues to light, but I also think there comes a time when ‘close enough is good enough’ and we need to keep moving forward.
The tool you just revealed could finally address (part of) issue 1. Mentioning its existence without releasing it leaves issue 1 still unaddressed.
So to reply:
You don’t need to pass my audit and you could choose a less strict auditor.
On my side, I will release a public audit of such tool once you open-source it.
Keeping the tool closed-source does not help DAO v1.1’s positioning.
Allowing a fair vote is the foundation of any voting system.
What I’m asking for is not perfection; it is preventing an attacker from taking control of crucial votes like Meta Rule changes and Steward elections. See: DAO v1.1 Public Testing Report - #2 by phroi
In good conscience, I need to take every possible step to make sure that this doesn’t happen.
My deep thanks to you for your efforts and what you do. We may not reach perfection, but we do not want to accept errors that can be avoided. Thank you once again. Best regards.
As an ordinary community member, I explicitly authorize and entrust @Phroi to act on my behalf in reviewing, questioning, and auditing DAO V1.1 and its related verification tools.
This statement represents my personal position only, not that of the entire community. But it also means that @Phroi is not speaking for “no one” — he is speaking, at minimum, for community members like me who want independent review and technical scrutiny before acceptance.
This authorization remains valid until I publicly state otherwise.
Woodbury.bit
Statement:
As an ordinary community member, I hereby explicitly state that, regarding the audit, review, and technical evaluation of DAO V1.1 and its related verification tools, I personally recognize and support @Phroi in raising questions, conducting inquiries, and reviewing the relevant designs and implementations on my behalf.
Taking a step back, I cannot require the V1.1 team to solve the issues (created by choosing this implementation), as this is not something an auditor can decide. I can point out vulnerabilities, errors, and gaps.
As Yeti pointed out, we must consider probabilities. Most of the time, voting works fine. Providing a fast path for likely scenarios and a slow path for edge cases is a reasonable and pragmatic decision.
Yes, this is indeed the first time I’ve explicitly proposed this tool. But let me clarify my earlier remarks. I called it an “internal version” and didn’t release it initially because it heavily reuses code from the existing DAO 1.1 centralized services.
Your understanding is correct—in the worst case, users are essentially running the DAO 1.1 service themselves.
I mentioned this in my earlier discussion with woodbury.bit: I’m unsure whether an internal audit tool can meet the community’s independence requirements. If people question the services built by the current dev team, wouldn’t they question an auditing tool built by the same team? That’s the coordination issue I referred to earlier.
Let me give you an overview of DAO 1.1’s current status. The project is stalled on two fronts.
First, the audit tool independence issue I just mentioned—we haven’t even started building it yet.
Second, Matt and Phroi are questioning whether we should pursue a full on-chain solution. That discussion is stuck on conflicting interpretations of the meta-rules, and I’m waiting for a final decision.
So this release covers only the core DAO 1.1 implementation—no standalone audit tool, no alternative solutions.
So if anyone has suggestions on these two issues, I propose opening a separate thread for discussion.
On the second issue, I think I’ve said everything I can say.
But on the first issue, we’re still pushing forward. We welcome anyone interested in discussing this or offering technical input.