InstaDApp Process Quality Review

Score : 58%

This is an InstaDApp Process Quality Audit completed on 24 August 2020 and revised on 4 September 2020 after the second audit was found and checked. It was performed using the Process Audit process (version 0.5) and is documented here. The audit was performed by ShinkaRex of Caliburn Consulting. Check out our Telegram.

The final score of the audit is 58%, a pass. The breakdown of the scoring is in Scoring Appendix.

Summary of the Process

Very simply, the audit looks for the following declarations from the developer's site. With these declarations, it is reasonable to trust the smart contracts.

  1. Here is my smart contract on the blockchain

  2. You can see it matches a software repository used to develop the code

  3. Here is the documentation that explains what my smart contract does

  4. Here are the tests I ran to verify my smart contract

  5. Here are the audit(s) performed to review my code by third party experts

Disclaimer

This report is for informational purposes only and does not constitute investment advice of any kind, nor does it constitute an offer to provide investment advisory or other services. Nothing in this report shall be considered a solicitation or offer to buy or sell any security, future, option or other financial instrument or to offer or provide any investment advice or service to any person in any jurisdiction. Nothing contained in this report constitutes investment advice or offers any opinion with respect to the suitability of any security, and the views expressed in this report should not be taken as advice to buy, sell or hold any security. The information in this report should not be relied upon for the purpose of investing. In preparing the information contained in this report, we have not taken into account the investment needs, objectives and financial circumstances of any particular investor. This information has no regard to the specific investment objectives, financial situation and particular needs of any specific recipient of this information and investments discussed may not be suitable for all investors.

Any views expressed in this report by us were prepared based upon the information available to us at the time such views were written. Changed or additional information could cause such views to change. All information is subject to possible correction. Information may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.

Executing Code Verification

This section looks at the code deployed on the Mainnet that gets audited and its corresponding software repository. The document explaining these questions is here. This audit will answer the questions;

  1. Is the executing code address(s) readily available? (Y/N)

  2. Is the code actively being used? (%)

  3. Are the Contract(s) Verified/Verifiable? (Y/N)

  4. Does the code match a tagged version in the code hosting platform? (%)

  5. Is the software repository healthy? (%)

Is the executing code address(s) readily available? (Y/N)

Answer: No

This code was quite difficult to find. In the docs, the connector and resolver pages had Etherscan links to the contracts. Within the contracts were the hard-coded addresses of other contracts. Therefore, after significant, non-obvious effort, almost all contracts were found.
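As an illustration of the pattern described above, the sketch below shows how a hard-coded address inside a contract is only discoverable by reading the verified source on Etherscan. The contract name and address are hypothetical placeholders, not taken from the InstaDApp source.

pragma solidity ^0.6.0;

// Hypothetical sketch only: the name and address are placeholders.
contract ExampleConnector {
    // A hard-coded dependency like this is invisible in the project docs;
    // a reviewer only finds it by following the verified source on Etherscan.
    address public constant LINKED_CONTRACT = 0x0000000000000000000000000000000000000001;

    function linkedContract() external pure returns (address) {
        return LINKED_CONTRACT;
    }
}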

How to improve this score

Make the Ethereum addresses of the smart contracts utilized by your application available on either your website or your GitHub (in the README, for instance). Ensure the addresses are up to date. This is a very important question with respect to the final score.

Is the code actively being used? (%)

Answer: 100%

Some contracts, such as the events contract, were used regularly.

Percentage Score Guidance

100% More than 10 transactions a day
70% More than 10 transactions a week
40% More than 10 transactions a month
10% Less than 10 transactions a month
0% No activity

Are the Contract(s) Verified/Verifiable? (Y/N)

Answer: Yes

0x2af7ea6Cb911035f3eb1ED895Cb6692C39ecbA97, 0xaeCfA2c0f4bAD0Ecee46dcd1250cd0334fE28BC0, 0xB3242e09C8E5cE6E14296b3d3AbC4C6965F49b98 were all verified contracts.

How to improve this score

Ensure that the deployed code is verified as described in this article for Etherscan or ETHPM. Improving this score may require redeployment.

Does the code match a tagged version on a code hosting platform? (%)

Answer: 30%

The final deployed code had hard-coded addresses that none of the GitHub releases had. In addition, auth.sol had multiple differences from the deployed contract.

Guidance:

100% Code matches and repository was clearly labelled
60% Code matches but no labelled repository; repository was found manually
30% Code does not match perfectly and repository was found manually
0% Matching code could not be found

GitHub address : https://github.com/InstaDApp/dsa-contracts

Deployed contracts in the following file;

Matching Repository: https://github.com/InstaDApp/dsa-contracts/tree/updated-docs-and-readme

How to improve this score

Ensure there is a clearly labelled repository holding all the contracts, documentation and tests for the deployed code. Ensure an appropriately labelled tag exists corresponding to the deployment date, and that release tags are clearly communicated.

Is the software repository healthy? (%)

Answer: 100%

With 169 commits and 7 branches, this is a healthy GitHub repository.

Documentation

This section looks at the software documentation. The document explaining these questions is here.

Required questions are;

  1. Is there a whitepaper? (Y/N)

  2. Are the basic application requirements documented? (Y/N)

  3. Do the requirements fully (100%) cover the deployed contracts? (%)

  4. Are there sufficiently detailed comments for all functions within the deployed contract code (%)

  5. Is it possible to trace software requirements to the implementation in code (%)

Is there a whitepaper? (Y/N)

Answer: Yes

Location: https://blog.instadapp.io/defi-smart-accounts/ (it is not called a whitepaper, but the content fully covers the intent).

How to improve this score

Ensure the whitepaper is available for download from your website or at least the software repository. Ideally, update the whitepaper to reflect the capabilities of your present application.

Are the basic application requirements documented? (Y/N)

Answer: No

The docs at the listed address fully document the Web3 interface to the smart contracts. However, they do not document the smart contracts themselves. As such, we cannot give this a yes.

Location: https://docs.instadapp.io/

How to improve this score

Write the document based on the deployed code. For guidance, refer to the SecurEth System Description Document.
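For example, contract-level documentation can start from short NatSpec headers on each deployed contract and grow into a full system description. The sketch below is hypothetical and only illustrates the kind of contract-level description such a document could be built from; it is not InstaDApp code or the SecurEth template.

pragma solidity ^0.6.0;

/// @title Smart Account (hypothetical example)
/// @notice Holds user funds and delegates calls to whitelisted connectors.
/// @dev This header sketches the contract-level description that a
///      System Description Document can be assembled from.
contract DocumentedAccount {
    // Contract body omitted; only the documentation header is relevant here.
}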

Do the requirements fully (100%) cover the deployed contracts? (%)

Answer: 0%

For the reasons listed above, there is no documentation on the smart contracts.

How to improve this score

This score can improve by adding content to the requirements document such that it comprehensively covers the deployed contracts. For guidance, refer to the SecurEth System Description Document. Using tools that aid traceability detection will help.

Are there sufficiently detailed comments for all functions within the deployed contract code (%)

Answer: 60%

Code examples are in the Appendix. As per the SLOC appendix, the comments-to-code ratio is 49%. NatSpec commenting was used, though it was never compiled into the documentation.
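For reference, the NatSpec style observed in the repository looks like the sketch below. The function is hypothetical and is included only to illustrate the comment format, not actual InstaDApp code.

pragma solidity ^0.6.0;

contract NatSpecExample {
    /**
     * @dev Adds two values. Hypothetical function used only to show the
     *      NatSpec tags (@dev, @param, @return) used in the repository.
     * @param a First operand.
     * @param b Second operand.
     * @return total Sum of a and b.
     */
    function add(uint a, uint b) internal pure returns (uint total) {
        total = a + b;
    }
}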

How to improve this score

This score can improve by adding comments to the deployed code such that it comprehensively covers the code. For guidance, refer to the SecurEth Software Requirements.

Is it possible to trace requirements to the implementation in code (%)

Answer: 0%

As there is no documentation for the code to trace to, this answer is clear.

Guidance:

100% - Clear explicit traceability between code and documentation at a requirement level for all code
60% - Clear association between code and documents via non-explicit traceability
40% - Documentation lists all the functions and describes their purpose
0% - No connection between documentation and code

How to improve this score

This score can improve by adding traceability from requirements to code such that it is clear where each requirement is coded. For reference, check the SecurEth guidelines on traceability.
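A lightweight way to add this traceability is to reference requirement identifiers from the requirements document directly in the code comments, as in the sketch below. The requirement ID, requirement text, and function are all hypothetical.

pragma solidity ^0.6.0;

contract TraceabilityExample {
    /**
     * @dev Implements requirement SD-4.2 of the System Description Document:
     *      "Only authorised owners may cast spells on the Smart Account."
     *      (Hypothetical requirement ID and text, for illustration only.)
     */
    function isAuthorised(address sender) internal pure returns (bool) {
        // The documentation can then link SD-4.2 back to this function.
        return sender != address(0);
    }
}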

Testing

This section looks at the software testing available. It is explained in this document. This section answers the following questions;

  1. Full test suite (Covers all the deployed code) (%)

  2. Code coverage (Covers all the deployed lines of code, or explains misses) (%)

  3. Scripts and instructions to run the tests (Y/N)

  4. Packaged with the deployed code (Y/N)

  5. Report of the results (%)

  6. Formal Verification test done (%)

  7. Stress Testing environment (%)

Is there a Full test suite? (%)

Answer: 100%

Full test suite with a 94% tests-to-code ratio (see the SLOC appendix).

Code coverage (Covers all the deployed lines of code, or explains misses) (%)

Answer: 50%

While there are clearly tests, and even a coverage script (as per the README), there is no evidence of results, so the 50% score is used.

Guidance:

100% - Documented full coverage
99-51% - Value of test coverage from documented results
50% - No indication of code coverage but clearly there is a reasonably complete set of tests
30% - Some tests evident but not complete
0% - No test for coverage seen

How to improve this score

This score can improve by adding tests achieving full code coverage. A clear report and scripts in the software repository will guarantee a high score.

Scripts and instructions to run the tests (Y/N)

Answer: Yes

In the GitHub README.

Packaged with the deployed code (Y/N)

Answer: Yes

Report of the results (%)

Answer: No

No test report was evident.

How to improve this score

Add a report with the results. The test scripts should generate the report or elements of it.

Formal Verification test done (%)

Answer: 0%

No evidence of formal verification.

Stress Testing environment (%)

Answer: 0%

No evidence of separate active test networks.

Audits

Answer: 100%

Two audits are mentioned in the security section of the docs. The audit from samczsun is brief, but his reputation is very strong. The audit report from PeckShield was corrupted, but we received a copy, so the score was revised to 100%.

Guidance:

  1. Multiple Audits performed before deployment and results public and implemented or not required (100%)

  2. Single audit performed before deployment and results public and implemented or not required (90%)

  3. Audit(s) performed after deployment and no changes required. Audit report is public. (70%)

  4. No audit performed (20%)

  5. Audit Performed after deployment, existence is public, report is not public and no improvements deployed (0%)

Appendices

Author Details

The author of this audit is Rex of Caliburn Consulting.

Email : rex@defisafety.com
Twitter : @defisafety

I started with Ethereum just before the DAO and that was a wonderful education. It showed the importance of code quality. The second Parity hack also showed the importance of good process. Here my aviation background offers some value. Aerospace knows how to make reliable code using quality processes.

I was coaxed to go to EthDenver 2018 and there I started SecurEth.org with Bryant and Roman. We created guidelines on good processes for blockchain code development. We received Ethereum Foundation funding to assist in their development.

Process Quality Audits are an extension of the SecurEth guidelines that will further increase the quality processes in Solidity and Vyper development.

Career-wise, I am a business development manager for an avionics supplier.

Scoring Appendix

Code Used Appendix

Example Code Appendix

contract InstaAccount is Record {

    event LogCast(address indexed origin, address indexed sender, uint value);

    receive() external payable {}

    /**
     * @dev Delegate the calls to Connector And this function is ran by cast().
     * @param _target Target to of Connector.
     * @param _data CallData of function in Connector.
     */
    function spell(address _target, bytes memory _data) internal {
        require(_target != address(0), "target-invalid");
        assembly {
            let succeeded := delegatecall(gas(), _target, add(_data, 0x20), mload(_data), 0, 0)
            switch iszero(succeeded)
            case 1 {
                // throw if delegatecall failed
                let size := returndatasize()
                returndatacopy(0x00, 0x00, size)
                revert(0x00, size)
            }
        }
    }

    /**
     * @dev This is the main function, Where all the different functions are called
     * from Smart Account.
     * @param _targets Array of Target(s) to of Connector.
     * @param _datas Array of Calldata(S) of function.
     */
    function cast(
        address[] calldata _targets,
        bytes[] calldata _datas,
        address _origin
    )
        external
        payable
    {
        require(isAuth(msg.sender) || msg.sender == instaIndex, "permission-denied");
        require(_targets.length == _datas.length, "array-length-invalid");
        IndexInterface indexContract = IndexInterface(instaIndex);
        bool isShield = shield;
        if (!isShield) {
            require(ConnectorsInterface(indexContract.connectors(version)).isConnector(_targets), "not-connector");
        } else {
            require(ConnectorsInterface(indexContract.connectors(version)).isStaticConnector(_targets), "not-static-connector");
        }
        for (uint i = 0; i < _targets.length; i++) {
            spell(_targets[i], _datas[i]);
        }
        address _check = indexContract.check(version);
        if (_check != address(0) && !isShield) require(CheckInterface(_check).isOk(), "not-ok");
        emit LogCast(_origin, msg.sender, msg.value);
    }
}

SLOC Appendix

Solidity Contracts

Language      Files    Lines    Blanks    Comments    Code    Complexity
Solidity      9        1058     167       291         600     94

Comments to Code: 291 / 600 = 49%

Javascript Tests

Language      Files    Lines    Blanks    Comments    Code    Complexity
JavaScript    3        700      92        44          564     0

Tests to Code: 564 / 600 = 94%