Serum Process Quality Review

Overview

This is a Serum Process Quality Review completed on 27/09/2021. It was performed using the Process Review process (version 0.7.3) and is documented here. The review was performed by Nick of DeFiSafety. Check out our Telegram.
The final score of the review is 23%, a FAIL. The breakdown of the scoring is in Scoring Appendix. For our purposes, a pass is 70%.

Summary of the Process

Very simply, the review looks for the following declarations from the developer's site. With these declarations, it is reasonable to trust the smart contracts.
    Here are my smart contracts on the blockchain
    Here is the documentation that explains what my smart contracts do
    Here are the tests I ran to verify my smart contract
    Here are the audit(s) performed on my code by third party experts
    Here are the admin controls and strategies

Disclaimer

This report is for informational purposes only and does not constitute investment advice of any kind, nor does it constitute an offer to provide investment advisory or other services. Nothing in this report shall be considered a solicitation or offer to buy or sell any security, token, future, option or other financial instrument or to offer or provide any investment advice or service to any person in any jurisdiction. Nothing contained in this report constitutes investment advice or offers any opinion with respect to the suitability of any security, and the views expressed in this report should not be taken as advice to buy, sell or hold any security. The information in this report should not be relied upon for the purpose of investing. In preparing the information contained in this report, we have not taken into account the investment needs, objectives and financial circumstances of any particular investor. This information has no regard to the specific investment objectives, financial situation and particular needs of any specific recipient of this information and investments discussed may not be suitable for all investors.
Any views expressed in this report by us were prepared based upon the information available to us at the time such views were written. The views expressed within this report are limited to DeFiSafety and the author and do not reflect those of any additional or third party and are strictly based upon DeFiSafety, its authors, interpretations and evaluation of relevant data. Changed or additional information could cause such views to change. All information is subject to possible correction. Information may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.
This completed report is copyright (c) DeFiSafety 2021. Permission is given to copy in whole, retaining this copyright label.

Chain

This section indicates the blockchain used by this protocol.
Chain: Solana
Guidance: Ethereum, Binance Smart Chain, Polygon, Avalanche, Terra, Celo, Arbitrum, Solana

Code and Team

This section looks at the code deployed on the Mainnet that is reviewed, along with its corresponding software repository. The document explaining these questions is here. This review will answer the following questions:
1) Are the executing code addresses readily available? (%)
2) Is the code actively being used? (%)
3) Is there a public software repository? (Y/N)
4) Is there a development history visible? (%)
5) Is the team public (not anonymous)? (Y/N)

1) Are the executing code addresses readily available? (%)

Answer: 100%
They are listed in the repository README at https://github.com/project-serum/serum-dex/blob/master/README.md, as indicated in the Appendix.
Guidance:
100% Clearly labelled and on website, docs or repo, quick to find
70% Clearly labelled and on website, docs or repo but takes a bit of looking
40% Addresses in mainnet.json, in discord or sub graph, etc
20% Address found but labeling not clear or easy to find
0% Executing addresses could not be found

2) Is the code actively being used? (%)

Answer: 100%
Activity is 20,000+ transactions a day on the Serum DEX V3 contract, as indicated in the Appendix.

Guidance:

100% More than 10 transactions a day
70% More than 10 transactions a week
40% More than 10 transactions a month
10% Less than 10 transactions a month
0% No activity
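
For illustration only (this is not part of the review process), on-chain activity for a Solana program can be estimated by counting recent transaction signatures through the public JSON RPC. The sketch below uses the solana-client and solana-sdk crates; the RPC endpoint and the program ID placeholder are assumptions, with the real Serum DEX V3 address to be taken from the Executing Code Appendix.

use std::str::FromStr;
use std::time::{SystemTime, UNIX_EPOCH};

use solana_client::rpc_client::RpcClient;
use solana_sdk::pubkey::Pubkey;

fn main() {
    // Public mainnet RPC endpoint (an assumption; any full node will do).
    let rpc = RpcClient::new("https://api.mainnet-beta.solana.com".to_string());

    // Placeholder: fill in the Serum DEX V3 program ID from the Executing Code Appendix.
    let program_id = Pubkey::from_str("<SERUM_DEX_V3_PROGRAM_ID>").expect("valid program id");

    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before unix epoch")
        .as_secs() as i64;

    // The RPC returns up to 1000 of the most recent signatures for the address;
    // count those confirmed within the last 24 hours.
    let signatures = rpc
        .get_signatures_for_address(&program_id)
        .expect("RPC call failed");
    let last_day = signatures
        .iter()
        .filter(|sig| sig.block_time.map_or(false, |t| now - t < 86_400))
        .count();

    println!("transactions in the last 24h (capped at 1000 signatures): {}", last_day);
}

A block explorer gives the same figure more directly; the sketch only shows the kind of on-chain evidence this score is based on.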

3) Is there a public software repository? (Y/N)

Answer: Yes
This question checks whether there is a public software repository containing the code at a minimum, and normally also tests and scripts. Even if the repository was created just to hold the files and has only a single commit, it gets a "Yes". For teams with private repositories, this answer is "No".

4) Is there a development history visible? (%)

Answer: 100%
There are 147 commits and 12 branches, making Serum's development history strong and steady.
This metric checks whether the software repository demonstrates a strong, steady history, which is normally evidenced by commits, branches and releases. A healthy repository shows more than a month of history (at a minimum).
Guidance:
100% Any one of 100+ commits, 10+ branches
70% Any one of 70+ commits, 7+ branches
50% Any one of 50+ commits, 5+ branches
30% Any one of 30+ commits, 3+ branches
0% Less than 2 branches or less than 30 commits
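
As a hedged illustration, repository figures like the commit and branch counts above can be gathered from a local clone by shelling out to git. The snippet below assumes git is installed and is run from inside the checkout; it is a sketch, not part of the review tooling.

use std::process::Command;

fn main() {
    // Total commits reachable from HEAD.
    let commits = Command::new("git")
        .args(["rev-list", "--count", "HEAD"])
        .output()
        .expect("failed to run git");
    let commit_count = String::from_utf8_lossy(&commits.stdout).trim().to_string();

    // Remote branches, skipping the origin/HEAD alias line.
    let branches = Command::new("git")
        .args(["branch", "-r"])
        .output()
        .expect("failed to run git");
    let branch_count = String::from_utf8_lossy(&branches.stdout)
        .lines()
        .filter(|line| !line.contains("->"))
        .count();

    println!("commits on HEAD: {}, remote branches: {}", commit_count, branch_count);
}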

5) Is the team public (not anonymous)? (Y/N)

Answer: No
The Serum team is anonymous.
For a "Yes" in this question, the real names of some team members must be public on the website or other documentation (LinkedIn, etc). If the team is anonymous, then this question is a "No".

Documentation

This section looks at the software documentation. The document explaining these questions is here.
Required questions are:
6) Is there a whitepaper? (Y/N)
7) Are the basic software functions documented? (Y/N)
8) Does the software function documentation fully (100%) cover the deployed contracts? (%)
9) Are there sufficiently detailed comments for all functions within the deployed contract code? (%)
10) Is it possible to trace from software documentation to the implementation in code? (%)

6) Is there a whitepaper? (Y/N)

Answer: Yes

7) Are the basic software functions documented? (Y/N)

Answer: No
The documents do not cover the software functions in detail.

How to improve this score:

Write the document based on the deployed code. For guidance, refer to the SecurEth System Description Document.

8) Does the software function documentation fully (100%) cover the deployed contracts? (%)

Answer: 0%
Only one deployed contract was found, and there was no reference documentation for it.
Guidance:
100% All contracts and functions documented
80% Only the major functions documented
79-1% Estimate of the level of software documentation
0% No software documentation

How to improve this score:

This score can be improved by adding content to the software functions document such that it comprehensively covers the requirements. For guidance, refer to the SecurEth System Description Document. Using tools that aid traceability detection will help.

9) Are there sufficiently detailed comments for all functions within the deployed contract code (%)

Answer: 0%
Code examples are in the Example Code Appendix. As per the SLOC count, the Comments to Code (CtC) ratio is 4%.
The Comments to Code (CtC) ratio is the primary metric for this score.
Guidance:
100% CtC > 100 Useful comments consistently on all code
90-70% CtC > 70 Useful comments on most code
60-20% CtC > 20 Some useful commenting
0% CtC < 20 No useful commenting

How to improve this score

This score can be improved by adding comments to the deployed code such that they comprehensively cover the code. For guidance, refer to the SecurEth Software Requirements.
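
The CtC figure quoted above comes from a line-count tool (see the SLOC Appendix). For illustration only, a naive approximation for a single Rust source file could look like the sketch below; the file path is hypothetical, and a real SLOC counter also handles block comments, doc comments and strings properly.

// Naive approximation of the Comments to Code (CtC) ratio for one file.
// Counts only `//` line comments; real SLOC tools are more thorough.
fn comments_to_code(source: &str) -> f64 {
    let mut comments = 0usize;
    let mut code = 0usize;
    for line in source.lines() {
        let trimmed = line.trim();
        if trimmed.is_empty() {
            continue; // blank lines count as neither
        } else if trimmed.starts_with("//") {
            comments += 1;
        } else {
            code += 1;
        }
    }
    if code == 0 { 0.0 } else { 100.0 * comments as f64 / code as f64 }
}

fn main() {
    // Hypothetical path; point this at any deployed source file.
    let source = std::fs::read_to_string("src/critbit.rs").expect("read source file");
    println!("CtC = {:.1}%", comments_to_code(&source));
    // Across the whole repository the SLOC Appendix reports 244 comment lines
    // against 6307 code lines, i.e. roughly 4%.
}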

10) Is it possible to trace from software documentation to the implementation in code (%)

Answer: 0%
There is no explanation of the software functions in the documentation. All that can be found is a single contract address handling a huge number of transactions for different purposes.
Guidance:
100% Clear explicit traceability between code and documentation at a requirement level for all code
60% Clear association between code and documents via non explicit traceability
40% Documentation lists all the functions and describes their functions
0% No connection between documentation and code

How to improve this score:

This score can improve by adding traceability from documentation to code such that it is clear where each outlined function is coded in the source code. For reference, check the SecurEth guidelines on traceability.

Testing

This section looks at the software testing available. It is explained in this document. This section answers the following questions:
11) Full test suite (Covers all the deployed code) (%)
12) Code coverage (Covers all the deployed lines of code, or explains misses) (%)
13) Scripts and instructions to run the tests (Y/N)
14) Report of the results (%)
15) Formal Verification test done (%)
16) Stress Testing environment (%)

11) Is there a Full test suite? (%)

Answer: 0%
No evidence of testing could be found.
Code examples are in the Appendix.
This score is guided by the Test to Code ratio (TtC). Generally, a good test to code ratio is over 100%. However, the reviewer's best judgement is the final deciding factor.
Guidance:
100% TtC > 120% Both unit and system test visible
80% TtC > 80% Both unit and system test visible
40% TtC < 80% Some tests visible
0% No tests obvious

How to improve this score:

This score can be improved by adding tests to fully cover the code. Document what is covered by traceability or test results in the software repository.

12) Code coverage (Covers all the deployed lines of code, or explains misses) (%)

Answer: 0%
No code coverage was found.
Guidance:
100% Documented full coverage
99-51% Value of test coverage from documented results
50% No indication of code coverage but clearly there is a reasonably complete set of tests
30% Some tests evident but not complete
0% No test for coverage seen

How to improve this score:

This score can be improved by adding tests that achieve full code coverage. A clear report and scripts in the software repository will guarantee a high score.

13) Scripts and instructions to run the tests (Y/N)

Answer: Yes

14) Report of the results (%)

Answer: 0%
No test report was found.
Guidance:
100% Detailed test report as described below
70% GitHub code coverage report visible
0% No test report evident

How to improve this score

Add a report with the results. The test scripts should generate the report or elements of it.

15) Formal Verification test done (%)

Answer: 0%
No formal verification was identified.

16) Stress Testing environment (%)

Answer: 100%
Serum is live on Solana Devnet.

Security

This section looks at the 3rd party software audits done. It is explained in this document. This section answers the following questions:
17) Did 3rd Party audits take place? (%)
18) Is the bounty value acceptably high? (%)

17) Did 3rd Party audits take place? (%)

Answer: 0%
Serum is unaudited.
Guidance:
100% Multiple audits performed before deployment and results public and implemented or not required
90% Single audit performed before deployment and results public and implemented or not required
70% Audit(s) performed after deployment and no changes required. Audit report is public
50% Audit(s) performed after deployment and changes needed but not implemented
20% No audit performed
0% Audit performed after deployment, existence is public, report is not public and no improvements deployed, OR smart contract addresses not found (where question 1 is 0%)
Deduct 25% if the code is in a private repo and there is no note from the auditors that the audit applies to the deployed code

18) Is the bounty value acceptably high (%)

Answer: 0%
No bug bounty information could be found.
Guidance:
100% Bounty is 10% of TVL or at least $1M AND active program (see below)
90% Bounty is 5% of TVL or at least $500k AND active program
80% Bounty is 5% of TVL or at least $500k
70% Bounty is $100k or over AND active program
60% Bounty is $100k or over
50% Bounty is $50k or over AND active program
40% Bounty is $50k or over
20% Bug bounty program bounty is less than $50k
0% No bug bounty program offered
An active program means that a third party (such as Immunefi) is actively driving hackers to the site. An inactive program would be static mentions on the docs.

Access Controls

This section covers the documentation of special access controls for a DeFi protocol. The admin access controls are the contracts that allow updating contracts or coefficients in the protocol. Since these contracts can allow the protocol admins to "change the rules", complete disclosure of capabilities is vital for users' transparency. It is explained in this document. The questions this section asks are as follows:
19) Can a user clearly and quickly find the status of the admin controls?
20) Is the information clear and complete?
21) Is the information in non-technical terms that pertain to the investments?
22) Is there Pause Control documentation including records of tests?

19) Can a user clearly and quickly find the status of the access controls (%)

Answer: 100%
Access controls are identified under multisig in the docs.
Guidance:
100% Clearly labelled and on website, docs or repo, quick to find
70% Clearly labelled and on website, docs or repo but takes a bit of looking
40% Access control docs in multiple places and not well labelled
20% Access control docs in multiple places and not labelled
0% Admin control information could not be found

20) Is the information clear and complete (%)

Answer: 30%
a) All contracts are clearly labelled as upgradeable (or not) -- 0% -- no details are given on which contracts are upgradeable
b) The type of ownership is clearly indicated (OnlyOwner / MultiSig / Defined Roles) -- 30% -- ownership is clearly indicated
c) The capabilities for change in the contracts are described -- 0% -- no capability for change beyond "upgrades" is identified
Guidance:
All the contracts are immutable -- 100%
OR
a) All contracts are clearly labelled as upgradeable (or not) -- 30% AND
b) The type of ownership is clearly indicated (OnlyOwner / MultiSig / Defined Roles) -- 30% AND
c) The capabilities for change in the contracts are described -- 30%

How to improve this score:

Create a document that covers the items described above. An example is enclosed.

21) Is the information in non-technical terms that pertain to the investments (%)

Answer: 90%
The information is well explained in clear language.
Guidance:
100% All the contracts are immutable
90% Description relates to investment safety and updates in clear, complete, non-software language
30% Description all in software-specific language
0% No admin control information could be found

How to improve this score:

Create a document that covers the items described above in plain language that investors can understand. An example is enclosed.

22) Is there Pause Control documentation including records of tests (%)

Answer: 0%
No pause control is mentioned.
Guidance:
100% All the contracts are immutable or no pause control is needed and this is explained
OR
100% Pause control(s) are clearly documented and there are records of at least one test within 3 months
80% Pause control(s) explained clearly but no evidence of regular tests
40% Pause controls mentioned with no detail on capability or tests
0% Pause control not documented or explained

How to improve this score:

Create a document that covers the items described above in plain language that investors can understand. An example is enclosed.

Appendices

Author Details

The author of this review is Rex of DeFi Safety.
Email : [email protected] Twitter : @defisafety
I started with Ethereum just before the DAO and that was a wonderful education. It showed the importance of code quality. The second Parity hack also showed the importance of good process. Here my aviation background offers some value. Aerospace knows how to make reliable code using quality processes.
I was coaxed to go to EthDenver 2018 and there I started SecurEth.org with Bryant and Roman. We created guidelines on good processes for blockchain code development. We got EthFoundation funding to assist in their development.
Process Quality Reviews are an extension of the SecurEth guidelines that will further increase the quality processes in Solidity and Vyper development.
DeFiSafety is my full time gig and we are working on funding vehicles for a permanent staff.

Scoring Appendix

Executing Code Appendix

Code Used Appendix

Example Code Appendix

};
use arrayref::{array_refs, mut_array_refs};
use bytemuck::{cast, cast_mut, cast_ref, cast_slice, cast_slice_mut, Pod, Zeroable};

use num_enum::{IntoPrimitive, TryFromPrimitive};
use static_assertions::const_assert_eq;
use std::{
    convert::{identity, TryFrom},
    mem::{align_of, size_of},
    num::NonZeroU64,
};

pub type NodeHandle = u32;

#[derive(IntoPrimitive, TryFromPrimitive)]
#[repr(u32)]
enum NodeTag {
    Uninitialized = 0,
    InnerNode = 1,
    LeafNode = 2,
    FreeNode = 3,
    LastFreeNode = 4,
}

#[derive(Copy, Clone)]
#[repr(packed)]
#[allow(dead_code)]
struct InnerNode {
    tag: u32,
    prefix_len: u32,
    key: u128,
    children: [u32; 2],
    _padding: [u64; 5],
}
unsafe impl Zeroable for InnerNode {}
unsafe impl Pod for InnerNode {}

impl InnerNode {
    fn walk_down(&self, search_key: u128) -> (NodeHandle, bool) {
        let crit_bit_mask = (1u128 << 127) >> self.prefix_len;
        let crit_bit = (search_key & crit_bit_mask) != 0;
        (self.children[crit_bit as usize], crit_bit)
    }
}

#[derive(Debug, Copy, Clone, PartialEq, Eq)]
#[repr(packed)]
pub struct LeafNode {
    tag: u32,
    owner_slot: u8,
    fee_tier: u8,
    padding: [u8; 2],
    key: u128,
    owner: [u64; 4],
    quantity: u64,
    client_order_id: u64,
}
unsafe impl Zeroable for LeafNode {}
unsafe impl Pod for LeafNode {}

impl LeafNode {
    #[inline]
    pub fn new(
        owner_slot: u8,
        key: u128,
        owner: [u64; 4],
        quantity: u64,
        fee_tier: FeeTier,
        client_order_id: u64,
    ) -> Self {
        LeafNode {
            tag: NodeTag::LeafNode.into(),
            owner_slot,
            fee_tier: fee_tier.into(),
            padding: [0; 2],
            key,
            owner,
            quantity,
            client_order_id,
        }
    }

    #[inline]
    pub fn fee_tier(&self) -> FeeTier {
        FeeTier::try_from_primitive(self.fee_tier).unwrap()
    }

    #[inline]
    pub fn price(&self) -> NonZeroU64 {
        NonZeroU64::new((self.key >> 64) as u64).unwrap()
    }

    #[inline]
    pub fn order_id(&self) -> u128 {
        self.key
    }

    #[inline]
    pub fn quantity(&self) -> u64 {
        self.quantity
    }

    #[inline]
    pub fn set_quantity(&mut self, quantity: u64) {
        self.quantity = quantity;
    }

    #[inline]
    pub fn owner(&self) -> [u64; 4] {
        self.owner
    }

    #[inline]
    pub fn owner_slot(&self) -> u8 {
        self.owner_slot
    }

    #[inline]
    pub fn client_order_id(&self) -> u64 {
        self.client_order_id
    }
}

#[derive(Copy, Clone)]
#[repr(packed)]
#[allow(dead_code)]
struct FreeNode {
    tag: u32,
    next: u32,
    _padding: [u64; 8],
}
unsafe impl Zeroable for FreeNode {}
unsafe impl Pod for FreeNode {}

const fn _const_max(a: usize, b: usize) -> usize {
    let gt = (a > b) as usize;
    gt * a + (1 - gt) * b
}

const _INNER_NODE_SIZE: usize = size_of::<InnerNode>();
const _LEAF_NODE_SIZE: usize = size_of::<LeafNode>();
const _FREE_NODE_SIZE: usize = size_of::<FreeNode>();
const _NODE_SIZE: usize = 72;

const _INNER_NODE_ALIGN: usize = align_of::<InnerNode>();
const _LEAF_NODE_ALIGN: usize = align_of::<LeafNode>();
const _FREE_NODE_ALIGN: usize = align_of::<FreeNode>();
const _NODE_ALIGN: usize = 1;

const_assert_eq!(_NODE_SIZE, _INNER_NODE_SIZE);
const_assert_eq!(_NODE_SIZE, _LEAF_NODE_SIZE);
const_assert_eq!(_NODE_SIZE, _FREE_NODE_SIZE);

const_assert_eq!(_NODE_ALIGN, _INNER_NODE_ALIGN);
const_assert_eq!(_NODE_ALIGN, _LEAF_NODE_ALIGN);
const_assert_eq!(_NODE_ALIGN, _FREE_NODE_ALIGN);

#[derive(Copy, Clone)]
#[repr(packed)]
#[allow(dead_code)]
pub struct AnyNode {
    tag: u32,
    data: [u32; 17],
}
unsafe impl Zeroable for AnyNode {}
unsafe impl Pod for AnyNode {}

enum NodeRef<'a> {
    Inner(&'a InnerNode),
    Leaf(&'a LeafNode),
}

enum NodeRefMut<'a> {
    Inner(&'a mut InnerNode),
    Leaf(&'a mut LeafNode),
}

impl AnyNode {
    fn key(&self) -> Option<u128> {
        match self.case()? {
            NodeRef::Inner(inner) => Some(inner.key),
            NodeRef::Leaf(leaf) => Some(leaf.key),
        }
    }

    #[cfg(test)]
    fn prefix_len(&self) -> u32 {
        match self.case().unwrap() {
            NodeRef::Inner(&InnerNode { prefix_len, .. }) => prefix_len,
            NodeRef::Leaf(_) => 128,
        }
    }

    fn children(&self) -> Option<[u32; 2]> {
        match self.case().unwrap() {
            NodeRef::Inner(&InnerNode { children, .. }) => Some(children),
            NodeRef::Leaf(_) => None,
        }
    }

    fn case(&self) -> Option<NodeRef> {
        match NodeTag::try_from(self.tag) {
            Ok(NodeTag::InnerNode) => Some(NodeRef::Inner(cast_ref(self))),
            Ok(NodeTag::LeafNode) => Some(NodeRef::Leaf(cast_ref(self))),
            _ => None,
        }
    }

    fn case_mut(&mut self) -> Option<NodeRefMut> {
        match NodeTag::try_from(self.tag) {
            Ok(NodeTag::InnerNode) => Some(NodeRefMut::Inner(cast_mut(self))),
            Ok(NodeTag::LeafNode) => Some(NodeRefMut::Leaf(cast_mut(self))),
            _ => None,
        }
    }

    #[inline]
    pub fn as_leaf(&self) -> Option<&LeafNode> {
        match self.case() {
            Some(NodeRef::Leaf(leaf_ref)) => Some(leaf_ref),
            _ => None,
        }
    }
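
The excerpt above is from the order book's crit-bit tree: the node layouts and the key handling for resting orders. Purely as an illustration of the kind of unit test the Testing section found no evidence of, the hypothetical in-module test below exercises walk_down and the price encoding (price() reads the upper 64 bits of the 128-bit key). It builds nodes as raw struct literals because FeeTier and the private fields live elsewhere in the crate, so the module placement and the test itself are assumptions, not code from the repository.

// Hypothetical in-module test sketch (not present in the repository).
#[cfg(test)]
mod sketch_tests {
    use super::*;

    #[test]
    fn walk_down_follows_the_critical_bit() {
        let inner = InnerNode {
            tag: NodeTag::InnerNode.into(),
            prefix_len: 0, // the critical bit is the most significant bit of the key
            key: 0,
            children: [10, 20],
            _padding: [0; 5],
        };
        // Critical bit clear -> left child, critical bit set -> right child.
        assert_eq!(inner.walk_down(0).0, 10);
        assert_eq!(inner.walk_down(1u128 << 127).0, 20);
    }

    #[test]
    fn leaf_key_packs_price_in_the_upper_64_bits() {
        let price: u64 = 1_000;
        let seq: u64 = 42;
        let leaf = LeafNode {
            tag: NodeTag::LeafNode.into(),
            owner_slot: 0,
            fee_tier: 0,
            padding: [0; 2],
            key: ((price as u128) << 64) | seq as u128,
            owner: [0; 4],
            quantity: 5,
            client_order_id: 7,
        };
        assert_eq!(leaf.price().get(), price);
        assert_eq!(leaf.quantity(), 5);
        assert_eq!(leaf.client_order_id(), 7);
    }
}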

SLOC Appendix

Rust Contracts

Language      Files    Lines    Blanks    Comments    Code    Complexity
Rust          9        7282     731       244         6307    317

Comments to Code: 244 / 6307 = 4%

Javascript Tests

Language      Files    Lines    Blanks    Comments    Code    Complexity
JavaScript    N/A      N/A      N/A       N/A         N/A     N/A

Tests to Code: N/A (no test code was found)